00:00:00.001 Started by upstream project "autotest-nightly" build number 4311
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3674
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.025 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.026 The recommended git tool is: git
00:00:00.026 using credential 00000000-0000-0000-0000-000000000002
00:00:00.029 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.043 Fetching changes from the remote Git repository
00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.057 Using shallow fetch with depth 1
00:00:00.057 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.057 > git --version # timeout=10
00:00:00.067 > git --version # 'git version 2.39.2'
00:00:00.067 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.089 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.089 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.314 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.324 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.336 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.336 > git config core.sparsecheckout # timeout=10
00:00:02.347 > git read-tree -mu HEAD # timeout=10
00:00:02.362 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.390 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.390 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.598 [Pipeline] Start of Pipeline
00:00:02.612 [Pipeline] library
00:00:02.613 Loading library shm_lib@master
00:00:02.613 Library shm_lib@master is cached. Copying from home.
00:00:02.630 [Pipeline] node
00:00:02.641 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.643 [Pipeline] {
00:00:02.652 [Pipeline] catchError
00:00:02.654 [Pipeline] {
00:00:02.666 [Pipeline] wrap
00:00:02.675 [Pipeline] {
00:00:02.682 [Pipeline] stage
00:00:02.683 [Pipeline] { (Prologue)
00:00:02.704 [Pipeline] echo
00:00:02.706 Node: VM-host-WFP7
00:00:02.711 [Pipeline] cleanWs
00:00:02.734 [WS-CLEANUP] Deleting project workspace...
00:00:02.734 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.742 [WS-CLEANUP] done
00:00:02.914 [Pipeline] setCustomBuildProperty
00:00:02.984 [Pipeline] httpRequest
00:00:03.304 [Pipeline] echo
00:00:03.305 Sorcerer 10.211.164.20 is alive
00:00:03.315 [Pipeline] retry
00:00:03.317 [Pipeline] {
00:00:03.328 [Pipeline] httpRequest
00:00:03.333 HttpMethod: GET
00:00:03.333 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.334 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.335 Response Code: HTTP/1.1 200 OK
00:00:03.335 Success: Status code 200 is in the accepted range: 200,404
00:00:03.336 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.482 [Pipeline] }
00:00:03.502 [Pipeline] // retry
00:00:03.510 [Pipeline] sh
00:00:03.793 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.809 [Pipeline] httpRequest
00:00:04.112 [Pipeline] echo
00:00:04.114 Sorcerer 10.211.164.20 is alive
00:00:04.123 [Pipeline] retry
00:00:04.126 [Pipeline] {
00:00:04.142 [Pipeline] httpRequest
00:00:04.146 HttpMethod: GET
00:00:04.147 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:04.147 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:04.148 Response Code: HTTP/1.1 200 OK
00:00:04.149 Success: Status code 200 is in the accepted range: 200,404
00:00:04.149 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:12.516 [Pipeline] }
00:00:12.533 [Pipeline] // retry
00:00:12.541 [Pipeline] sh
00:00:12.829 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:15.384 [Pipeline] sh
00:00:15.671 + git -C spdk log --oneline -n5
00:00:15.671 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:00:15.671 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:00:15.671 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:00:15.671 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:00:15.671 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:00:15.694 [Pipeline] writeFile
00:00:15.712 [Pipeline] sh
00:00:15.997 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:16.010 [Pipeline] sh
00:00:16.295 + cat autorun-spdk.conf
00:00:16.295 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:16.295 SPDK_RUN_ASAN=1
00:00:16.295 SPDK_RUN_UBSAN=1
00:00:16.295 SPDK_TEST_RAID=1
00:00:16.295 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:16.303 RUN_NIGHTLY=1
00:00:16.305 [Pipeline] }
00:00:16.318 [Pipeline] // stage
00:00:16.334 [Pipeline] stage
00:00:16.336 [Pipeline] { (Run VM)
00:00:16.349 [Pipeline] sh
00:00:16.633 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:16.633 + echo 'Start stage prepare_nvme.sh'
00:00:16.633 Start stage prepare_nvme.sh
00:00:16.633 + [[ -n 5 ]]
00:00:16.633 + disk_prefix=ex5
00:00:16.633 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:16.633 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:16.633 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:16.633 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:16.633 ++ SPDK_RUN_ASAN=1
00:00:16.633 ++ SPDK_RUN_UBSAN=1
00:00:16.633 ++ SPDK_TEST_RAID=1
00:00:16.634 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:16.634 ++ RUN_NIGHTLY=1
00:00:16.634 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:16.634 + nvme_files=()
00:00:16.634 + declare -A nvme_files
00:00:16.634 + backend_dir=/var/lib/libvirt/images/backends
00:00:16.634 + nvme_files['nvme.img']=5G
00:00:16.634 + nvme_files['nvme-cmb.img']=5G
00:00:16.634 + nvme_files['nvme-multi0.img']=4G
00:00:16.634 + nvme_files['nvme-multi1.img']=4G
00:00:16.634 + nvme_files['nvme-multi2.img']=4G
00:00:16.634 + nvme_files['nvme-openstack.img']=8G
00:00:16.634 + nvme_files['nvme-zns.img']=5G
00:00:16.634 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:16.634 + (( SPDK_TEST_FTL == 1 ))
00:00:16.634 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:16.634 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:16.634 + for nvme in "${!nvme_files[@]}"
00:00:16.634 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:00:16.634 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:16.634 + for nvme in "${!nvme_files[@]}"
00:00:16.634 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:00:16.634 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:16.634 + for nvme in "${!nvme_files[@]}"
00:00:16.634 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:00:16.634 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:16.634 + for nvme in "${!nvme_files[@]}"
00:00:16.634 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:00:16.634 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:16.634 + for nvme in "${!nvme_files[@]}"
00:00:16.634 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:00:16.634 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:16.634 + for nvme in "${!nvme_files[@]}"
00:00:16.634 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:00:16.634 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:16.634 + for nvme in "${!nvme_files[@]}"
00:00:16.634 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:00:16.894 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:16.894 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:00:16.894 + echo 'End stage prepare_nvme.sh'
00:00:16.894 End stage prepare_nvme.sh
00:00:16.908 [Pipeline] sh
00:00:17.216 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:17.216 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:00:17.216
00:00:17.216 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:17.216 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:17.216 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:17.216 HELP=0
00:00:17.216 DRY_RUN=0
00:00:17.216 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:00:17.216 NVME_DISKS_TYPE=nvme,nvme,
00:00:17.216 NVME_AUTO_CREATE=0
00:00:17.216 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:00:17.216 NVME_CMB=,,
00:00:17.216 NVME_PMR=,,
00:00:17.216 NVME_ZNS=,,
00:00:17.216 NVME_MS=,,
00:00:17.216 NVME_FDP=,,
00:00:17.216 SPDK_VAGRANT_DISTRO=fedora39
00:00:17.216 SPDK_VAGRANT_VMCPU=10
00:00:17.216 SPDK_VAGRANT_VMRAM=12288
00:00:17.216 SPDK_VAGRANT_PROVIDER=libvirt
00:00:17.216 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:17.216 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:17.216 SPDK_OPENSTACK_NETWORK=0
00:00:17.216 VAGRANT_PACKAGE_BOX=0
00:00:17.216 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:17.216 FORCE_DISTRO=true
00:00:17.216 VAGRANT_BOX_VERSION=
00:00:17.216 EXTRA_VAGRANTFILES=
00:00:17.216 NIC_MODEL=virtio
00:00:17.216
00:00:17.216 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:17.216 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:19.124 Bringing machine 'default' up with 'libvirt' provider...
00:00:19.724 ==> default: Creating image (snapshot of base box volume).
00:00:19.724 ==> default: Creating domain with the following settings...
00:00:19.724 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732760153_2aaccf4cef79da52d0de
00:00:19.724 ==> default: -- Domain type: kvm
00:00:19.724 ==> default: -- Cpus: 10
00:00:19.724 ==> default: -- Feature: acpi
00:00:19.724 ==> default: -- Feature: apic
00:00:19.724 ==> default: -- Feature: pae
00:00:19.724 ==> default: -- Memory: 12288M
00:00:19.724 ==> default: -- Memory Backing: hugepages:
00:00:19.724 ==> default: -- Management MAC:
00:00:19.724 ==> default: -- Loader:
00:00:19.724 ==> default: -- Nvram:
00:00:19.725 ==> default: -- Base box: spdk/fedora39
00:00:19.725 ==> default: -- Storage pool: default
00:00:19.725 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732760153_2aaccf4cef79da52d0de.img (20G)
00:00:19.725 ==> default: -- Volume Cache: default
00:00:19.725 ==> default: -- Kernel:
00:00:19.725 ==> default: -- Initrd:
00:00:19.725 ==> default: -- Graphics Type: vnc
00:00:19.725 ==> default: -- Graphics Port: -1
00:00:19.725 ==> default: -- Graphics IP: 127.0.0.1
00:00:19.725 ==> default: -- Graphics Password: Not defined
00:00:19.725 ==> default: -- Video Type: cirrus
00:00:19.725 ==> default: -- Video VRAM: 9216
00:00:19.725 ==> default: -- Sound Type:
00:00:19.725 ==> default: -- Keymap: en-us
00:00:19.725 ==> default: -- TPM Path:
00:00:19.725 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:19.725 ==> default: -- Command line args:
00:00:19.725 ==> default: -> value=-device,
00:00:19.725 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:19.725 ==> default: -> value=-drive,
00:00:19.725 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:00:19.725 ==> default: -> value=-device,
00:00:19.725 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:19.725 ==> default: -> value=-device,
00:00:19.725 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:19.725 ==> default: -> value=-drive,
00:00:19.725 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:19.725 ==> default: -> value=-device,
00:00:19.725 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:19.725 ==> default: -> value=-drive,
00:00:19.725 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:19.725 ==> default: -> value=-device,
00:00:19.725 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:19.725 ==> default: -> value=-drive,
00:00:19.725 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:19.725 ==> default: -> value=-device,
00:00:19.725 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:19.725 ==> default: Creating shared folders metadata...
00:00:19.985 ==> default: Starting domain.
00:00:21.368 ==> default: Waiting for domain to get an IP address...
00:00:39.490 ==> default: Waiting for SSH to become available...
00:00:39.490 ==> default: Configuring and enabling network interfaces...
00:00:44.777 default: SSH address: 192.168.121.196:22
00:00:44.777 default: SSH username: vagrant
00:00:44.777 default: SSH auth method: private key
00:00:47.320 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:00:55.454 ==> default: Mounting SSHFS shared folder...
00:00:57.998 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:00:57.998 ==> default: Checking Mount..
00:00:59.913 ==> default: Folder Successfully Mounted!
00:00:59.913 ==> default: Running provisioner: file...
00:01:00.855 default: ~/.gitconfig => .gitconfig
00:01:01.426
00:01:01.426 SUCCESS!
00:01:01.426
00:01:01.426 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:01.426 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:01.426 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:01.426
00:01:01.437 [Pipeline] }
00:01:01.452 [Pipeline] // stage
00:01:01.461 [Pipeline] dir
00:01:01.462 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:01.463 [Pipeline] {
00:01:01.476 [Pipeline] catchError
00:01:01.478 [Pipeline] {
00:01:01.490 [Pipeline] sh
00:01:01.777 + vagrant ssh-config --host vagrant
00:01:01.777 + sed -ne /^Host/,$p
00:01:01.777 + tee ssh_conf
00:01:04.319 Host vagrant
00:01:04.319 HostName 192.168.121.196
00:01:04.319 User vagrant
00:01:04.319 Port 22
00:01:04.319 UserKnownHostsFile /dev/null
00:01:04.319 StrictHostKeyChecking no
00:01:04.319 PasswordAuthentication no
00:01:04.319 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:04.319 IdentitiesOnly yes
00:01:04.319 LogLevel FATAL
00:01:04.319 ForwardAgent yes
00:01:04.319 ForwardX11 yes
00:01:04.319
00:01:04.334 [Pipeline] withEnv
00:01:04.337 [Pipeline] {
00:01:04.353 [Pipeline] sh
00:01:04.643 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:04.643 source /etc/os-release
00:01:04.643 [[ -e /image.version ]] && img=$(< /image.version)
00:01:04.643 # Minimal, systemd-like check.
00:01:04.643 if [[ -e /.dockerenv ]]; then
00:01:04.643 # Clear garbage from the node's name:
00:01:04.643 # agt-er_autotest_547-896 -> autotest_547-896
00:01:04.643 # $HOSTNAME is the actual container id
00:01:04.643 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:04.643 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:04.643 # We can assume this is a mount from a host where container is running,
00:01:04.643 # so fetch its hostname to easily identify the target swarm worker.
00:01:04.643 container="$(< /etc/hostname) ($agent)"
00:01:04.643 else
00:01:04.643 # Fallback
00:01:04.643 container=$agent
00:01:04.643 fi
00:01:04.643 fi
00:01:04.643 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:04.643
00:01:04.917 [Pipeline] }
00:01:04.934 [Pipeline] // withEnv
00:01:04.943 [Pipeline] setCustomBuildProperty
00:01:04.958 [Pipeline] stage
00:01:04.961 [Pipeline] { (Tests)
00:01:04.978 [Pipeline] sh
00:01:05.286 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:05.577 [Pipeline] sh
00:01:05.862 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:06.140 [Pipeline] timeout
00:01:06.140 Timeout set to expire in 1 hr 30 min
00:01:06.142 [Pipeline] {
00:01:06.158 [Pipeline] sh
00:01:06.443 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:07.014 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:07.028 [Pipeline] sh
00:01:07.317 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:07.597 [Pipeline] sh
00:01:07.878 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:08.156 [Pipeline] sh
00:01:08.441 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:08.701 ++ readlink -f spdk_repo
00:01:08.701 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:08.701 + [[ -n /home/vagrant/spdk_repo ]]
00:01:08.701 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:08.701 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:08.701 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:08.701 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:08.701 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:08.701 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:08.701 + cd /home/vagrant/spdk_repo
00:01:08.701 + source /etc/os-release
00:01:08.701 ++ NAME='Fedora Linux'
00:01:08.701 ++ VERSION='39 (Cloud Edition)'
00:01:08.701 ++ ID=fedora
00:01:08.701 ++ VERSION_ID=39
00:01:08.701 ++ VERSION_CODENAME=
00:01:08.701 ++ PLATFORM_ID=platform:f39
00:01:08.701 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:08.701 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:08.701 ++ LOGO=fedora-logo-icon
00:01:08.701 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:08.701 ++ HOME_URL=https://fedoraproject.org/
00:01:08.701 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:08.701 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:08.701 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:08.701 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:08.701 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:08.701 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:08.701 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:08.701 ++ SUPPORT_END=2024-11-12
00:01:08.701 ++ VARIANT='Cloud Edition'
00:01:08.701 ++ VARIANT_ID=cloud
00:01:08.701 + uname -a
00:01:08.701 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:08.701 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:09.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:09.272 Hugepages
00:01:09.272 node hugesize free / total
00:01:09.272 node0 1048576kB 0 / 0
00:01:09.272 node0 2048kB 0 / 0
00:01:09.272
00:01:09.272 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:09.272 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:09.272 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:09.272 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:09.272 + rm -f /tmp/spdk-ld-path
00:01:09.272 + source autorun-spdk.conf
00:01:09.272 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.272 ++ SPDK_RUN_ASAN=1
00:01:09.272 ++ SPDK_RUN_UBSAN=1
00:01:09.272 ++ SPDK_TEST_RAID=1
00:01:09.272 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.272 ++ RUN_NIGHTLY=1
00:01:09.272 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:09.272 + [[ -n '' ]]
00:01:09.272 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:09.532 + for M in /var/spdk/build-*-manifest.txt
00:01:09.532 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:09.532 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:09.532 + for M in /var/spdk/build-*-manifest.txt
00:01:09.532 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:09.532 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:09.532 + for M in /var/spdk/build-*-manifest.txt
00:01:09.532 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:09.532 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:09.532 ++ uname
00:01:09.532 + [[ Linux == \L\i\n\u\x ]]
00:01:09.532 + sudo dmesg -T
00:01:09.532 + sudo dmesg --clear
00:01:09.533 + dmesg_pid=5439
00:01:09.533 + sudo dmesg -Tw
00:01:09.533 + [[ Fedora Linux == FreeBSD ]]
00:01:09.533 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:09.533 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:09.533 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:09.533 + [[ -x /usr/src/fio-static/fio ]]
00:01:09.533 + export FIO_BIN=/usr/src/fio-static/fio
00:01:09.533 + FIO_BIN=/usr/src/fio-static/fio
00:01:09.533 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:09.533 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:09.533 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:09.533 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:09.533 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:09.533 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:09.533 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:09.533 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:09.533 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:09.533 02:16:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:09.533 02:16:43 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:09.533 02:16:43 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.533 02:16:43 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:09.533 02:16:43 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:09.533 02:16:43 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:09.533 02:16:43 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.533 02:16:43 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1
00:01:09.533 02:16:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:09.533 02:16:43 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:09.793 02:16:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:09.793 02:16:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:09.793 02:16:43 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:09.793 02:16:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:09.793 02:16:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:09.793 02:16:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:09.793 02:16:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.793 02:16:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.793 02:16:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.793 02:16:43 -- paths/export.sh@5 -- $ export PATH
00:01:09.793 02:16:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.793 02:16:43 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:09.793 02:16:43 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:09.793 02:16:43 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732760203.XXXXXX
00:01:09.793 02:16:43 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732760203.fqIKN5
00:01:09.793 02:16:43 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:09.793 02:16:43 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:09.793 02:16:43 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:09.793 02:16:43 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:09.793 02:16:43 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:09.793 02:16:43 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:09.793 02:16:43 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:09.793 02:16:43 -- common/autotest_common.sh@10 -- $ set +x
00:01:09.793 02:16:43 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:09.793 02:16:43 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:09.793 02:16:43 -- pm/common@17 -- $ local monitor
00:01:09.793 02:16:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.793 02:16:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.793 02:16:43 -- pm/common@25 -- $ sleep 1
00:01:09.793 02:16:43 -- pm/common@21 -- $ date +%s
00:01:09.793 02:16:43 -- pm/common@21 -- $ date +%s
00:01:09.794 02:16:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732760203
00:01:09.794 02:16:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732760203
00:01:09.794 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732760203_collect-vmstat.pm.log
00:01:09.794 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732760203_collect-cpu-load.pm.log
00:01:10.734 02:16:44 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:10.734 02:16:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:10.734 02:16:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:10.734 02:16:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:10.734 02:16:44 -- spdk/autobuild.sh@16 -- $ date -u
00:01:10.734 Thu Nov 28 02:16:44 AM UTC 2024
00:01:10.734 02:16:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:10.734 v25.01-pre-276-g35cd3e84d
00:01:10.734 02:16:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:10.734 02:16:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:10.734 02:16:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:10.734 02:16:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:10.734 02:16:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:10.734 ************************************
00:01:10.734 START TEST asan
00:01:10.734 ************************************
00:01:10.734 using asan
00:01:10.734 02:16:44 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:10.734
00:01:10.734 real 0m0.001s
00:01:10.734 user 0m0.000s
00:01:10.734 sys 0m0.000s
00:01:10.734 02:16:44 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:10.734 02:16:44 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:10.734 ************************************
00:01:10.734 END TEST asan
00:01:10.734 ************************************
00:01:10.994 02:16:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:10.994 02:16:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:10.994 02:16:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:10.994 02:16:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:10.994 02:16:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:10.994 ************************************
00:01:10.994 START TEST ubsan
00:01:10.994 ************************************
00:01:10.994 using ubsan
00:01:10.994 02:16:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:10.994
00:01:10.994 real 0m0.000s
00:01:10.994 user 0m0.000s
00:01:10.994 sys 0m0.000s
00:01:10.994 02:16:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:10.994 02:16:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:10.994 ************************************
00:01:10.994 END TEST ubsan
00:01:10.994 ************************************
00:01:10.994 02:16:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:10.994 02:16:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:10.994 02:16:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:10.994 02:16:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:10.994 02:16:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:10.994 02:16:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:10.994 02:16:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:10.994 02:16:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:10.994 02:16:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:10.994 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:10.994 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:11.564 Using 'verbs' RDMA provider
00:01:27.834 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:42.722 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:43.291 Creating mk/config.mk...done.
00:01:43.291 Creating mk/cc.flags.mk...done.
00:01:43.291 Type 'make' to build.
00:01:43.291 02:17:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:43.291 02:17:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:43.291 02:17:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:43.291 02:17:16 -- common/autotest_common.sh@10 -- $ set +x
00:01:43.291 ************************************
00:01:43.291 START TEST make
00:01:43.291 ************************************
00:01:43.291 02:17:16 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:43.861 make[1]: Nothing to be done for 'all'.
00:01:53.853 The Meson build system
00:01:53.853 Version: 1.5.0
00:01:53.853 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:01:53.853 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:01:53.853 Build type: native build
00:01:53.853 Program cat found: YES (/usr/bin/cat)
00:01:53.853 Project name: DPDK
00:01:53.853 Project version: 24.03.0
00:01:53.853 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:53.853 C linker for the host machine: cc ld.bfd 2.40-14
00:01:53.853 Host machine cpu family: x86_64
00:01:53.853 Host machine cpu: x86_64
00:01:53.853 Message: ## Building in Developer Mode ##
00:01:53.853 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:53.853 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:01:53.853 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:53.853 Program python3 found: YES (/usr/bin/python3)
00:01:53.853 Program cat found: YES (/usr/bin/cat)
00:01:53.853 Compiler for C supports arguments -march=native: YES
00:01:53.853 Checking for size of "void *" : 8
00:01:53.853 Checking for size of "void *" : 8 (cached)
00:01:53.853 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:53.853 Library m found: YES
00:01:53.853 Library numa found: YES
00:01:53.853 Has header "numaif.h" : YES
00:01:53.853 Library fdt found: NO
00:01:53.853 Library execinfo found: NO
00:01:53.853 Has header "execinfo.h" : YES
00:01:53.853 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:53.853 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:53.853 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:53.853 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:53.853 Run-time dependency openssl found: YES 3.1.1
00:01:53.853 Run-time dependency libpcap found: YES 1.10.4
00:01:53.853 Has header "pcap.h" with dependency
libpcap: YES 00:01:53.853 Compiler for C supports arguments -Wcast-qual: YES 00:01:53.853 Compiler for C supports arguments -Wdeprecated: YES 00:01:53.853 Compiler for C supports arguments -Wformat: YES 00:01:53.853 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:53.853 Compiler for C supports arguments -Wformat-security: NO 00:01:53.853 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:53.853 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:53.853 Compiler for C supports arguments -Wnested-externs: YES 00:01:53.853 Compiler for C supports arguments -Wold-style-definition: YES 00:01:53.853 Compiler for C supports arguments -Wpointer-arith: YES 00:01:53.853 Compiler for C supports arguments -Wsign-compare: YES 00:01:53.853 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:53.853 Compiler for C supports arguments -Wundef: YES 00:01:53.853 Compiler for C supports arguments -Wwrite-strings: YES 00:01:53.853 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:53.853 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:53.853 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:53.853 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:53.853 Program objdump found: YES (/usr/bin/objdump) 00:01:53.853 Compiler for C supports arguments -mavx512f: YES 00:01:53.853 Checking if "AVX512 checking" compiles: YES 00:01:53.853 Fetching value of define "__SSE4_2__" : 1 00:01:53.853 Fetching value of define "__AES__" : 1 00:01:53.853 Fetching value of define "__AVX__" : 1 00:01:53.853 Fetching value of define "__AVX2__" : 1 00:01:53.853 Fetching value of define "__AVX512BW__" : 1 00:01:53.853 Fetching value of define "__AVX512CD__" : 1 00:01:53.853 Fetching value of define "__AVX512DQ__" : 1 00:01:53.853 Fetching value of define "__AVX512F__" : 1 00:01:53.853 Fetching value of define "__AVX512VL__" : 1 00:01:53.853 Fetching value of define 
"__PCLMUL__" : 1 00:01:53.853 Fetching value of define "__RDRND__" : 1 00:01:53.853 Fetching value of define "__RDSEED__" : 1 00:01:53.853 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:53.853 Fetching value of define "__znver1__" : (undefined) 00:01:53.853 Fetching value of define "__znver2__" : (undefined) 00:01:53.853 Fetching value of define "__znver3__" : (undefined) 00:01:53.853 Fetching value of define "__znver4__" : (undefined) 00:01:53.853 Library asan found: YES 00:01:53.853 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:53.853 Message: lib/log: Defining dependency "log" 00:01:53.853 Message: lib/kvargs: Defining dependency "kvargs" 00:01:53.853 Message: lib/telemetry: Defining dependency "telemetry" 00:01:53.854 Library rt found: YES 00:01:53.854 Checking for function "getentropy" : NO 00:01:53.854 Message: lib/eal: Defining dependency "eal" 00:01:53.854 Message: lib/ring: Defining dependency "ring" 00:01:53.854 Message: lib/rcu: Defining dependency "rcu" 00:01:53.854 Message: lib/mempool: Defining dependency "mempool" 00:01:53.854 Message: lib/mbuf: Defining dependency "mbuf" 00:01:53.854 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:53.854 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:53.854 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:53.854 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:53.854 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:53.854 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:53.854 Compiler for C supports arguments -mpclmul: YES 00:01:53.854 Compiler for C supports arguments -maes: YES 00:01:53.854 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:53.854 Compiler for C supports arguments -mavx512bw: YES 00:01:53.854 Compiler for C supports arguments -mavx512dq: YES 00:01:53.854 Compiler for C supports arguments -mavx512vl: YES 00:01:53.854 Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:53.854 Compiler for C supports arguments -mavx2: YES 00:01:53.854 Compiler for C supports arguments -mavx: YES 00:01:53.854 Message: lib/net: Defining dependency "net" 00:01:53.854 Message: lib/meter: Defining dependency "meter" 00:01:53.854 Message: lib/ethdev: Defining dependency "ethdev" 00:01:53.854 Message: lib/pci: Defining dependency "pci" 00:01:53.854 Message: lib/cmdline: Defining dependency "cmdline" 00:01:53.854 Message: lib/hash: Defining dependency "hash" 00:01:53.854 Message: lib/timer: Defining dependency "timer" 00:01:53.854 Message: lib/compressdev: Defining dependency "compressdev" 00:01:53.854 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:53.854 Message: lib/dmadev: Defining dependency "dmadev" 00:01:53.854 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:53.854 Message: lib/power: Defining dependency "power" 00:01:53.854 Message: lib/reorder: Defining dependency "reorder" 00:01:53.854 Message: lib/security: Defining dependency "security" 00:01:53.854 Has header "linux/userfaultfd.h" : YES 00:01:53.854 Has header "linux/vduse.h" : YES 00:01:53.854 Message: lib/vhost: Defining dependency "vhost" 00:01:53.854 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:53.854 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:53.854 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:53.854 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:53.854 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:53.854 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:53.854 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:53.854 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:53.854 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:53.854 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:53.854 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:53.854 Configuring doxy-api-html.conf using configuration 00:01:53.854 Configuring doxy-api-man.conf using configuration 00:01:53.854 Program mandb found: YES (/usr/bin/mandb) 00:01:53.854 Program sphinx-build found: NO 00:01:53.854 Configuring rte_build_config.h using configuration 00:01:53.854 Message: 00:01:53.854 ================= 00:01:53.854 Applications Enabled 00:01:53.854 ================= 00:01:53.854 00:01:53.854 apps: 00:01:53.854 00:01:53.854 00:01:53.854 Message: 00:01:53.854 ================= 00:01:53.854 Libraries Enabled 00:01:53.854 ================= 00:01:53.854 00:01:53.854 libs: 00:01:53.854 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:53.854 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:53.854 cryptodev, dmadev, power, reorder, security, vhost, 00:01:53.854 00:01:53.854 Message: 00:01:53.854 =============== 00:01:53.854 Drivers Enabled 00:01:53.854 =============== 00:01:53.854 00:01:53.854 common: 00:01:53.854 00:01:53.854 bus: 00:01:53.854 pci, vdev, 00:01:53.854 mempool: 00:01:53.854 ring, 00:01:53.854 dma: 00:01:53.854 00:01:53.854 net: 00:01:53.854 00:01:53.854 crypto: 00:01:53.854 00:01:53.854 compress: 00:01:53.854 00:01:53.854 vdpa: 00:01:53.854 00:01:53.854 00:01:53.854 Message: 00:01:53.854 ================= 00:01:53.854 Content Skipped 00:01:53.854 ================= 00:01:53.854 00:01:53.854 apps: 00:01:53.854 dumpcap: explicitly disabled via build config 00:01:53.854 graph: explicitly disabled via build config 00:01:53.854 pdump: explicitly disabled via build config 00:01:53.854 proc-info: explicitly disabled via build config 00:01:53.854 test-acl: explicitly disabled via build config 00:01:53.854 test-bbdev: explicitly disabled via build config 00:01:53.854 test-cmdline: explicitly disabled via build config 00:01:53.854 test-compress-perf: explicitly disabled via build config 00:01:53.854 test-crypto-perf: explicitly disabled via build 
config 00:01:53.854 test-dma-perf: explicitly disabled via build config 00:01:53.854 test-eventdev: explicitly disabled via build config 00:01:53.854 test-fib: explicitly disabled via build config 00:01:53.854 test-flow-perf: explicitly disabled via build config 00:01:53.854 test-gpudev: explicitly disabled via build config 00:01:53.854 test-mldev: explicitly disabled via build config 00:01:53.854 test-pipeline: explicitly disabled via build config 00:01:53.854 test-pmd: explicitly disabled via build config 00:01:53.854 test-regex: explicitly disabled via build config 00:01:53.854 test-sad: explicitly disabled via build config 00:01:53.854 test-security-perf: explicitly disabled via build config 00:01:53.854 00:01:53.854 libs: 00:01:53.854 argparse: explicitly disabled via build config 00:01:53.854 metrics: explicitly disabled via build config 00:01:53.854 acl: explicitly disabled via build config 00:01:53.854 bbdev: explicitly disabled via build config 00:01:53.854 bitratestats: explicitly disabled via build config 00:01:53.854 bpf: explicitly disabled via build config 00:01:53.854 cfgfile: explicitly disabled via build config 00:01:53.854 distributor: explicitly disabled via build config 00:01:53.854 efd: explicitly disabled via build config 00:01:53.854 eventdev: explicitly disabled via build config 00:01:53.854 dispatcher: explicitly disabled via build config 00:01:53.854 gpudev: explicitly disabled via build config 00:01:53.854 gro: explicitly disabled via build config 00:01:53.854 gso: explicitly disabled via build config 00:01:53.854 ip_frag: explicitly disabled via build config 00:01:53.854 jobstats: explicitly disabled via build config 00:01:53.854 latencystats: explicitly disabled via build config 00:01:53.854 lpm: explicitly disabled via build config 00:01:53.854 member: explicitly disabled via build config 00:01:53.854 pcapng: explicitly disabled via build config 00:01:53.854 rawdev: explicitly disabled via build config 00:01:53.854 regexdev: explicitly 
disabled via build config 00:01:53.854 mldev: explicitly disabled via build config 00:01:53.854 rib: explicitly disabled via build config 00:01:53.854 sched: explicitly disabled via build config 00:01:53.854 stack: explicitly disabled via build config 00:01:53.854 ipsec: explicitly disabled via build config 00:01:53.854 pdcp: explicitly disabled via build config 00:01:53.854 fib: explicitly disabled via build config 00:01:53.854 port: explicitly disabled via build config 00:01:53.854 pdump: explicitly disabled via build config 00:01:53.854 table: explicitly disabled via build config 00:01:53.854 pipeline: explicitly disabled via build config 00:01:53.854 graph: explicitly disabled via build config 00:01:53.855 node: explicitly disabled via build config 00:01:53.855 00:01:53.855 drivers: 00:01:53.855 common/cpt: not in enabled drivers build config 00:01:53.855 common/dpaax: not in enabled drivers build config 00:01:53.855 common/iavf: not in enabled drivers build config 00:01:53.855 common/idpf: not in enabled drivers build config 00:01:53.855 common/ionic: not in enabled drivers build config 00:01:53.855 common/mvep: not in enabled drivers build config 00:01:53.855 common/octeontx: not in enabled drivers build config 00:01:53.855 bus/auxiliary: not in enabled drivers build config 00:01:53.855 bus/cdx: not in enabled drivers build config 00:01:53.855 bus/dpaa: not in enabled drivers build config 00:01:53.855 bus/fslmc: not in enabled drivers build config 00:01:53.855 bus/ifpga: not in enabled drivers build config 00:01:53.855 bus/platform: not in enabled drivers build config 00:01:53.855 bus/uacce: not in enabled drivers build config 00:01:53.855 bus/vmbus: not in enabled drivers build config 00:01:53.855 common/cnxk: not in enabled drivers build config 00:01:53.855 common/mlx5: not in enabled drivers build config 00:01:53.855 common/nfp: not in enabled drivers build config 00:01:53.855 common/nitrox: not in enabled drivers build config 00:01:53.855 common/qat: not 
in enabled drivers build config 00:01:53.855 common/sfc_efx: not in enabled drivers build config 00:01:53.855 mempool/bucket: not in enabled drivers build config 00:01:53.855 mempool/cnxk: not in enabled drivers build config 00:01:53.855 mempool/dpaa: not in enabled drivers build config 00:01:53.855 mempool/dpaa2: not in enabled drivers build config 00:01:53.855 mempool/octeontx: not in enabled drivers build config 00:01:53.855 mempool/stack: not in enabled drivers build config 00:01:53.855 dma/cnxk: not in enabled drivers build config 00:01:53.855 dma/dpaa: not in enabled drivers build config 00:01:53.855 dma/dpaa2: not in enabled drivers build config 00:01:53.855 dma/hisilicon: not in enabled drivers build config 00:01:53.855 dma/idxd: not in enabled drivers build config 00:01:53.855 dma/ioat: not in enabled drivers build config 00:01:53.855 dma/skeleton: not in enabled drivers build config 00:01:53.855 net/af_packet: not in enabled drivers build config 00:01:53.855 net/af_xdp: not in enabled drivers build config 00:01:53.855 net/ark: not in enabled drivers build config 00:01:53.855 net/atlantic: not in enabled drivers build config 00:01:53.855 net/avp: not in enabled drivers build config 00:01:53.855 net/axgbe: not in enabled drivers build config 00:01:53.855 net/bnx2x: not in enabled drivers build config 00:01:53.855 net/bnxt: not in enabled drivers build config 00:01:53.855 net/bonding: not in enabled drivers build config 00:01:53.855 net/cnxk: not in enabled drivers build config 00:01:53.855 net/cpfl: not in enabled drivers build config 00:01:53.855 net/cxgbe: not in enabled drivers build config 00:01:53.855 net/dpaa: not in enabled drivers build config 00:01:53.855 net/dpaa2: not in enabled drivers build config 00:01:53.855 net/e1000: not in enabled drivers build config 00:01:53.855 net/ena: not in enabled drivers build config 00:01:53.855 net/enetc: not in enabled drivers build config 00:01:53.855 net/enetfec: not in enabled drivers build config 
00:01:53.855 net/enic: not in enabled drivers build config 00:01:53.855 net/failsafe: not in enabled drivers build config 00:01:53.855 net/fm10k: not in enabled drivers build config 00:01:53.855 net/gve: not in enabled drivers build config 00:01:53.855 net/hinic: not in enabled drivers build config 00:01:53.855 net/hns3: not in enabled drivers build config 00:01:53.855 net/i40e: not in enabled drivers build config 00:01:53.855 net/iavf: not in enabled drivers build config 00:01:53.855 net/ice: not in enabled drivers build config 00:01:53.855 net/idpf: not in enabled drivers build config 00:01:53.855 net/igc: not in enabled drivers build config 00:01:53.855 net/ionic: not in enabled drivers build config 00:01:53.855 net/ipn3ke: not in enabled drivers build config 00:01:53.855 net/ixgbe: not in enabled drivers build config 00:01:53.855 net/mana: not in enabled drivers build config 00:01:53.855 net/memif: not in enabled drivers build config 00:01:53.855 net/mlx4: not in enabled drivers build config 00:01:53.855 net/mlx5: not in enabled drivers build config 00:01:53.855 net/mvneta: not in enabled drivers build config 00:01:53.855 net/mvpp2: not in enabled drivers build config 00:01:53.855 net/netvsc: not in enabled drivers build config 00:01:53.855 net/nfb: not in enabled drivers build config 00:01:53.855 net/nfp: not in enabled drivers build config 00:01:53.855 net/ngbe: not in enabled drivers build config 00:01:53.855 net/null: not in enabled drivers build config 00:01:53.855 net/octeontx: not in enabled drivers build config 00:01:53.855 net/octeon_ep: not in enabled drivers build config 00:01:53.855 net/pcap: not in enabled drivers build config 00:01:53.855 net/pfe: not in enabled drivers build config 00:01:53.855 net/qede: not in enabled drivers build config 00:01:53.855 net/ring: not in enabled drivers build config 00:01:53.855 net/sfc: not in enabled drivers build config 00:01:53.855 net/softnic: not in enabled drivers build config 00:01:53.855 net/tap: not in 
enabled drivers build config 00:01:53.855 net/thunderx: not in enabled drivers build config 00:01:53.855 net/txgbe: not in enabled drivers build config 00:01:53.855 net/vdev_netvsc: not in enabled drivers build config 00:01:53.855 net/vhost: not in enabled drivers build config 00:01:53.855 net/virtio: not in enabled drivers build config 00:01:53.855 net/vmxnet3: not in enabled drivers build config 00:01:53.855 raw/*: missing internal dependency, "rawdev" 00:01:53.855 crypto/armv8: not in enabled drivers build config 00:01:53.855 crypto/bcmfs: not in enabled drivers build config 00:01:53.855 crypto/caam_jr: not in enabled drivers build config 00:01:53.855 crypto/ccp: not in enabled drivers build config 00:01:53.855 crypto/cnxk: not in enabled drivers build config 00:01:53.855 crypto/dpaa_sec: not in enabled drivers build config 00:01:53.855 crypto/dpaa2_sec: not in enabled drivers build config 00:01:53.855 crypto/ipsec_mb: not in enabled drivers build config 00:01:53.855 crypto/mlx5: not in enabled drivers build config 00:01:53.855 crypto/mvsam: not in enabled drivers build config 00:01:53.855 crypto/nitrox: not in enabled drivers build config 00:01:53.855 crypto/null: not in enabled drivers build config 00:01:53.855 crypto/octeontx: not in enabled drivers build config 00:01:53.855 crypto/openssl: not in enabled drivers build config 00:01:53.855 crypto/scheduler: not in enabled drivers build config 00:01:53.855 crypto/uadk: not in enabled drivers build config 00:01:53.855 crypto/virtio: not in enabled drivers build config 00:01:53.855 compress/isal: not in enabled drivers build config 00:01:53.855 compress/mlx5: not in enabled drivers build config 00:01:53.855 compress/nitrox: not in enabled drivers build config 00:01:53.855 compress/octeontx: not in enabled drivers build config 00:01:53.855 compress/zlib: not in enabled drivers build config 00:01:53.855 regex/*: missing internal dependency, "regexdev" 00:01:53.855 ml/*: missing internal dependency, "mldev" 
00:01:53.855 vdpa/ifc: not in enabled drivers build config 00:01:53.855 vdpa/mlx5: not in enabled drivers build config 00:01:53.855 vdpa/nfp: not in enabled drivers build config 00:01:53.855 vdpa/sfc: not in enabled drivers build config 00:01:53.855 event/*: missing internal dependency, "eventdev" 00:01:53.855 baseband/*: missing internal dependency, "bbdev" 00:01:53.855 gpu/*: missing internal dependency, "gpudev" 00:01:53.855 00:01:53.855 00:01:53.855 Build targets in project: 85 00:01:53.855 00:01:53.855 DPDK 24.03.0 00:01:53.855 00:01:53.855 User defined options 00:01:53.855 buildtype : debug 00:01:53.855 default_library : shared 00:01:53.855 libdir : lib 00:01:53.855 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:53.855 b_sanitize : address 00:01:53.855 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:53.855 c_link_args : 00:01:53.855 cpu_instruction_set: native 00:01:53.856 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:53.856 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:53.856 enable_docs : false 00:01:53.856 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:53.856 enable_kmods : false 00:01:53.856 max_lcores : 128 00:01:53.856 tests : false 00:01:53.856 00:01:53.856 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:53.856 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:54.116 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:01:54.116 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.116 [3/268] Linking static target lib/librte_kvargs.a 00:01:54.116 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.116 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.116 [6/268] Linking static target lib/librte_log.a 00:01:54.376 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:54.376 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:54.376 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.376 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.376 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:54.376 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:54.637 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:54.637 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:54.637 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:54.637 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:54.637 [17/268] Linking static target lib/librte_telemetry.a 00:01:54.637 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:54.897 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.897 [20/268] Linking target lib/librte_log.so.24.1 00:01:55.157 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.157 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.157 [23/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.157 [24/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.157 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:55.157 [26/268] Linking target lib/librte_kvargs.so.24.1 00:01:55.157 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.157 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.157 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.157 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.417 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:55.417 [32/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:55.417 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.417 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.677 [35/268] Linking target lib/librte_telemetry.so.24.1 00:01:55.677 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.677 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:55.677 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.677 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.677 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:55.677 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.677 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.677 [43/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:55.677 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:55.937 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:55.937 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.937 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.937 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:56.197 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:56.197 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:56.197 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:56.197 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:56.457 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:56.457 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:56.457 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:56.457 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:56.457 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:56.457 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:56.718 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:56.718 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:56.718 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:56.718 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:56.718 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:56.718 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:56.718 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:56.978 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:56.978 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:01:57.237 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:57.237 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:57.238 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:57.238 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:57.238 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:57.238 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:57.238 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:57.498 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:57.498 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:57.498 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:57.498 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:57.498 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:57.758 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:57.758 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:57.758 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.758 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:57.758 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.017 [85/268] Linking static target lib/librte_eal.a 00:01:58.017 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.017 [87/268] Linking static target lib/librte_ring.a 00:01:58.017 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.276 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.276 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.276 [91/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.276 [92/268] Linking static target lib/librte_rcu.a 00:01:58.276 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.276 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.276 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.276 [96/268] Linking static target lib/librte_mempool.a 00:01:58.535 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.535 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.535 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.535 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.794 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.794 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.794 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.794 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.794 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:59.053 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:59.053 [107/268] Linking static target lib/librte_net.a 00:01:59.053 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:59.053 [109/268] Linking static target lib/librte_meter.a 00:01:59.053 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.053 [111/268] Linking static target lib/librte_mbuf.a 00:01:59.310 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:59.310 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:59.310 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:59.310 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:59.310 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.310 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.310 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.569 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:59.829 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.829 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:00.088 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.088 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:00.088 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:00.088 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:00.348 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:00.348 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:00.348 [128/268] Linking static target lib/librte_pci.a 00:02:00.348 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:00.348 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:00.348 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:00.348 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:00.608 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:00.608 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.608 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:00.608 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:00.608 [137/268] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.608 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.608 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.608 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.608 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.608 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.868 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:00.868 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:00.868 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.868 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:00.868 [147/268] Linking static target lib/librte_cmdline.a 00:02:01.128 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:01.128 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:01.128 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:01.388 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:01.388 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:01.388 [153/268] Linking static target lib/librte_timer.a 00:02:01.388 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:01.388 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:01.388 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.647 [157/268] Linking static target lib/librte_ethdev.a 00:02:01.648 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:01.648 [159/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:01.648 [160/268] Linking static target lib/librte_hash.a 00:02:01.908 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:01.908 [162/268] Linking static target lib/librte_compressdev.a 00:02:01.908 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:01.908 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.908 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:01.908 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:02.167 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:02.167 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:02.167 [169/268] Linking static target lib/librte_dmadev.a 00:02:02.426 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:02.426 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:02.426 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:02.426 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.685 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:02.685 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.945 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.945 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:02.945 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:02.945 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.945 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 
00:02:02.945 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:02.945 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:02.945 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:03.204 [184/268] Linking static target lib/librte_power.a 00:02:03.204 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:03.204 [186/268] Linking static target lib/librte_cryptodev.a 00:02:03.464 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:03.464 [188/268] Linking static target lib/librte_reorder.a 00:02:03.464 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:03.724 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:03.724 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:03.724 [192/268] Linking static target lib/librte_security.a 00:02:03.724 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:03.983 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:03.983 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.243 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.243 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.503 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:04.503 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:04.503 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:04.503 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.503 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:04.763 [203/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:04.763 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:05.022 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:05.022 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:05.022 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:05.022 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:05.022 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:05.022 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:05.282 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:05.282 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:05.282 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.282 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.282 [215/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.282 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:05.282 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.282 [218/268] Linking static target drivers/librte_bus_pci.a 00:02:05.282 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.541 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.541 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.541 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.801 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.801 [224/268] Compiling C 
object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.801 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.802 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:05.802 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.182 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:08.562 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.562 [230/268] Linking target lib/librte_eal.so.24.1 00:02:08.562 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:08.562 [232/268] Linking target lib/librte_timer.so.24.1 00:02:08.562 [233/268] Linking target lib/librte_ring.so.24.1 00:02:08.562 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:08.828 [235/268] Linking target lib/librte_meter.so.24.1 00:02:08.828 [236/268] Linking target lib/librte_pci.so.24.1 00:02:08.828 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:08.828 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:08.828 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:08.828 [240/268] Linking target lib/librte_mempool.so.24.1 00:02:08.828 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:08.828 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:08.828 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:08.828 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:08.828 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:08.828 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:08.828 [247/268] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:09.101 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:09.101 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:09.101 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:09.101 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:09.101 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:09.101 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:09.101 [254/268] Linking target lib/librte_net.so.24.1 00:02:09.367 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:09.368 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:09.368 [257/268] Linking target lib/librte_security.so.24.1 00:02:09.368 [258/268] Linking target lib/librte_hash.so.24.1 00:02:09.368 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:09.627 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:09.886 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.145 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:10.145 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:10.145 [264/268] Linking target lib/librte_power.so.24.1 00:02:10.715 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:10.715 [266/268] Linking static target lib/librte_vhost.a 00:02:13.253 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.253 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:13.253 INFO: autodetecting backend as ninja 00:02:13.253 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:28.146 CC lib/ut/ut.o 00:02:28.146 CC lib/ut_mock/mock.o 00:02:28.146 CC lib/log/log.o 
00:02:28.146 CC lib/log/log_flags.o 00:02:28.146 CC lib/log/log_deprecated.o 00:02:28.146 LIB libspdk_ut_mock.a 00:02:28.146 LIB libspdk_ut.a 00:02:28.146 LIB libspdk_log.a 00:02:28.146 SO libspdk_ut.so.2.0 00:02:28.146 SO libspdk_ut_mock.so.6.0 00:02:28.146 SO libspdk_log.so.7.1 00:02:28.146 SYMLINK libspdk_ut_mock.so 00:02:28.146 SYMLINK libspdk_ut.so 00:02:28.146 SYMLINK libspdk_log.so 00:02:28.406 CXX lib/trace_parser/trace.o 00:02:28.673 CC lib/util/base64.o 00:02:28.673 CC lib/util/bit_array.o 00:02:28.673 CC lib/util/cpuset.o 00:02:28.673 CC lib/util/crc16.o 00:02:28.673 CC lib/util/crc32.o 00:02:28.673 CC lib/util/crc32c.o 00:02:28.673 CC lib/dma/dma.o 00:02:28.673 CC lib/ioat/ioat.o 00:02:28.673 CC lib/vfio_user/host/vfio_user_pci.o 00:02:28.673 CC lib/util/crc32_ieee.o 00:02:28.673 CC lib/util/crc64.o 00:02:28.673 CC lib/util/dif.o 00:02:28.673 CC lib/vfio_user/host/vfio_user.o 00:02:28.673 CC lib/util/fd.o 00:02:28.673 LIB libspdk_dma.a 00:02:28.673 CC lib/util/fd_group.o 00:02:28.673 SO libspdk_dma.so.5.0 00:02:28.673 CC lib/util/file.o 00:02:28.673 CC lib/util/hexlify.o 00:02:28.954 SYMLINK libspdk_dma.so 00:02:28.954 CC lib/util/iov.o 00:02:28.954 LIB libspdk_ioat.a 00:02:28.954 SO libspdk_ioat.so.7.0 00:02:28.954 CC lib/util/math.o 00:02:28.954 CC lib/util/net.o 00:02:28.954 LIB libspdk_vfio_user.a 00:02:28.954 CC lib/util/pipe.o 00:02:28.954 CC lib/util/strerror_tls.o 00:02:28.954 SO libspdk_vfio_user.so.5.0 00:02:28.954 SYMLINK libspdk_ioat.so 00:02:28.954 CC lib/util/string.o 00:02:28.954 SYMLINK libspdk_vfio_user.so 00:02:28.954 CC lib/util/uuid.o 00:02:28.954 CC lib/util/xor.o 00:02:28.954 CC lib/util/zipf.o 00:02:28.954 CC lib/util/md5.o 00:02:29.522 LIB libspdk_util.a 00:02:29.522 LIB libspdk_trace_parser.a 00:02:29.522 SO libspdk_util.so.10.1 00:02:29.522 SO libspdk_trace_parser.so.6.0 00:02:29.522 SYMLINK libspdk_util.so 00:02:29.522 SYMLINK libspdk_trace_parser.so 00:02:29.781 CC lib/rdma_utils/rdma_utils.o 00:02:29.781 CC lib/conf/conf.o 
00:02:29.781 CC lib/vmd/vmd.o 00:02:29.781 CC lib/vmd/led.o 00:02:29.781 CC lib/idxd/idxd.o 00:02:29.781 CC lib/json/json_util.o 00:02:29.781 CC lib/json/json_write.o 00:02:29.781 CC lib/json/json_parse.o 00:02:29.781 CC lib/idxd/idxd_user.o 00:02:29.781 CC lib/env_dpdk/env.o 00:02:30.041 CC lib/env_dpdk/memory.o 00:02:30.041 LIB libspdk_conf.a 00:02:30.041 SO libspdk_conf.so.6.0 00:02:30.041 CC lib/idxd/idxd_kernel.o 00:02:30.041 CC lib/env_dpdk/pci.o 00:02:30.041 LIB libspdk_rdma_utils.a 00:02:30.041 CC lib/env_dpdk/init.o 00:02:30.041 SYMLINK libspdk_conf.so 00:02:30.041 CC lib/env_dpdk/threads.o 00:02:30.041 SO libspdk_rdma_utils.so.1.0 00:02:30.041 LIB libspdk_json.a 00:02:30.041 SYMLINK libspdk_rdma_utils.so 00:02:30.041 CC lib/env_dpdk/pci_ioat.o 00:02:30.301 SO libspdk_json.so.6.0 00:02:30.301 CC lib/env_dpdk/pci_virtio.o 00:02:30.301 SYMLINK libspdk_json.so 00:02:30.301 CC lib/env_dpdk/pci_vmd.o 00:02:30.301 CC lib/env_dpdk/pci_idxd.o 00:02:30.301 CC lib/env_dpdk/pci_event.o 00:02:30.301 CC lib/rdma_provider/common.o 00:02:30.301 CC lib/env_dpdk/sigbus_handler.o 00:02:30.301 CC lib/env_dpdk/pci_dpdk.o 00:02:30.301 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:30.561 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:30.561 LIB libspdk_idxd.a 00:02:30.561 CC lib/jsonrpc/jsonrpc_server.o 00:02:30.561 SO libspdk_idxd.so.12.1 00:02:30.561 LIB libspdk_vmd.a 00:02:30.561 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:30.561 SO libspdk_vmd.so.6.0 00:02:30.561 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:30.561 SYMLINK libspdk_idxd.so 00:02:30.561 CC lib/jsonrpc/jsonrpc_client.o 00:02:30.561 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:30.561 SYMLINK libspdk_vmd.so 00:02:30.821 LIB libspdk_rdma_provider.a 00:02:30.821 SO libspdk_rdma_provider.so.7.0 00:02:30.821 LIB libspdk_jsonrpc.a 00:02:30.821 SYMLINK libspdk_rdma_provider.so 00:02:30.821 SO libspdk_jsonrpc.so.6.0 00:02:30.821 SYMLINK libspdk_jsonrpc.so 00:02:31.391 CC lib/rpc/rpc.o 00:02:31.391 LIB libspdk_env_dpdk.a 00:02:31.391 SO 
libspdk_env_dpdk.so.15.1 00:02:31.650 LIB libspdk_rpc.a 00:02:31.650 SYMLINK libspdk_env_dpdk.so 00:02:31.650 SO libspdk_rpc.so.6.0 00:02:31.650 SYMLINK libspdk_rpc.so 00:02:31.909 CC lib/notify/notify.o 00:02:31.909 CC lib/notify/notify_rpc.o 00:02:31.909 CC lib/keyring/keyring_rpc.o 00:02:31.909 CC lib/keyring/keyring.o 00:02:31.909 CC lib/trace/trace.o 00:02:31.909 CC lib/trace/trace_flags.o 00:02:31.909 CC lib/trace/trace_rpc.o 00:02:32.168 LIB libspdk_notify.a 00:02:32.168 SO libspdk_notify.so.6.0 00:02:32.168 SYMLINK libspdk_notify.so 00:02:32.168 LIB libspdk_keyring.a 00:02:32.427 LIB libspdk_trace.a 00:02:32.427 SO libspdk_keyring.so.2.0 00:02:32.427 SO libspdk_trace.so.11.0 00:02:32.427 SYMLINK libspdk_keyring.so 00:02:32.427 SYMLINK libspdk_trace.so 00:02:32.686 CC lib/thread/thread.o 00:02:32.686 CC lib/thread/iobuf.o 00:02:32.946 CC lib/sock/sock.o 00:02:32.946 CC lib/sock/sock_rpc.o 00:02:33.205 LIB libspdk_sock.a 00:02:33.205 SO libspdk_sock.so.10.0 00:02:33.465 SYMLINK libspdk_sock.so 00:02:33.725 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:33.725 CC lib/nvme/nvme_ctrlr.o 00:02:33.725 CC lib/nvme/nvme_fabric.o 00:02:33.725 CC lib/nvme/nvme_ns_cmd.o 00:02:33.725 CC lib/nvme/nvme_ns.o 00:02:33.725 CC lib/nvme/nvme_pcie_common.o 00:02:33.725 CC lib/nvme/nvme_pcie.o 00:02:33.725 CC lib/nvme/nvme.o 00:02:33.725 CC lib/nvme/nvme_qpair.o 00:02:34.294 LIB libspdk_thread.a 00:02:34.294 CC lib/nvme/nvme_quirks.o 00:02:34.294 SO libspdk_thread.so.11.0 00:02:34.294 CC lib/nvme/nvme_transport.o 00:02:34.553 SYMLINK libspdk_thread.so 00:02:34.553 CC lib/nvme/nvme_discovery.o 00:02:34.553 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:34.553 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:34.553 CC lib/nvme/nvme_tcp.o 00:02:34.553 CC lib/nvme/nvme_opal.o 00:02:34.553 CC lib/nvme/nvme_io_msg.o 00:02:34.813 CC lib/nvme/nvme_poll_group.o 00:02:34.813 CC lib/nvme/nvme_zns.o 00:02:34.813 CC lib/nvme/nvme_stubs.o 00:02:35.072 CC lib/nvme/nvme_auth.o 00:02:35.072 CC lib/nvme/nvme_cuse.o 
00:02:35.072 CC lib/nvme/nvme_rdma.o 00:02:35.331 CC lib/accel/accel.o 00:02:35.331 CC lib/blob/blobstore.o 00:02:35.331 CC lib/blob/request.o 00:02:35.331 CC lib/blob/zeroes.o 00:02:35.331 CC lib/init/json_config.o 00:02:35.590 CC lib/blob/blob_bs_dev.o 00:02:35.590 CC lib/init/subsystem.o 00:02:35.590 CC lib/init/subsystem_rpc.o 00:02:35.849 CC lib/init/rpc.o 00:02:35.849 CC lib/accel/accel_rpc.o 00:02:35.849 CC lib/accel/accel_sw.o 00:02:35.849 LIB libspdk_init.a 00:02:35.849 CC lib/fsdev/fsdev.o 00:02:35.849 CC lib/virtio/virtio.o 00:02:35.849 SO libspdk_init.so.6.0 00:02:35.849 CC lib/virtio/virtio_vhost_user.o 00:02:36.107 CC lib/virtio/virtio_vfio_user.o 00:02:36.107 SYMLINK libspdk_init.so 00:02:36.107 CC lib/fsdev/fsdev_io.o 00:02:36.107 CC lib/fsdev/fsdev_rpc.o 00:02:36.107 CC lib/virtio/virtio_pci.o 00:02:36.366 CC lib/event/reactor.o 00:02:36.366 CC lib/event/app.o 00:02:36.366 CC lib/event/app_rpc.o 00:02:36.366 CC lib/event/log_rpc.o 00:02:36.366 LIB libspdk_accel.a 00:02:36.366 CC lib/event/scheduler_static.o 00:02:36.366 SO libspdk_accel.so.16.0 00:02:36.366 LIB libspdk_virtio.a 00:02:36.366 LIB libspdk_nvme.a 00:02:36.366 SO libspdk_virtio.so.7.0 00:02:36.366 SYMLINK libspdk_accel.so 00:02:36.625 SYMLINK libspdk_virtio.so 00:02:36.625 LIB libspdk_fsdev.a 00:02:36.625 SO libspdk_nvme.so.15.0 00:02:36.625 SO libspdk_fsdev.so.2.0 00:02:36.625 SYMLINK libspdk_fsdev.so 00:02:36.625 CC lib/bdev/bdev_rpc.o 00:02:36.625 CC lib/bdev/bdev_zone.o 00:02:36.625 CC lib/bdev/bdev.o 00:02:36.625 CC lib/bdev/scsi_nvme.o 00:02:36.625 CC lib/bdev/part.o 00:02:36.884 LIB libspdk_event.a 00:02:36.884 SO libspdk_event.so.14.0 00:02:36.884 SYMLINK libspdk_nvme.so 00:02:36.884 SYMLINK libspdk_event.so 00:02:36.884 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:37.452 LIB libspdk_fuse_dispatcher.a 00:02:37.710 SO libspdk_fuse_dispatcher.so.1.0 00:02:37.710 SYMLINK libspdk_fuse_dispatcher.so 00:02:38.650 LIB libspdk_blob.a 00:02:38.909 SO libspdk_blob.so.12.0 00:02:38.909 
SYMLINK libspdk_blob.so 00:02:39.524 CC lib/blobfs/blobfs.o 00:02:39.524 CC lib/blobfs/tree.o 00:02:39.524 CC lib/lvol/lvol.o 00:02:39.524 LIB libspdk_bdev.a 00:02:39.524 SO libspdk_bdev.so.17.0 00:02:39.524 SYMLINK libspdk_bdev.so 00:02:39.786 CC lib/ftl/ftl_core.o 00:02:39.786 CC lib/ftl/ftl_layout.o 00:02:39.786 CC lib/ftl/ftl_init.o 00:02:39.786 CC lib/ftl/ftl_debug.o 00:02:39.786 CC lib/scsi/dev.o 00:02:39.786 CC lib/nvmf/ctrlr.o 00:02:39.786 CC lib/nbd/nbd.o 00:02:39.786 CC lib/ublk/ublk.o 00:02:40.046 CC lib/ftl/ftl_io.o 00:02:40.046 CC lib/ftl/ftl_sb.o 00:02:40.046 CC lib/scsi/lun.o 00:02:40.046 CC lib/scsi/port.o 00:02:40.046 CC lib/scsi/scsi.o 00:02:40.046 CC lib/scsi/scsi_bdev.o 00:02:40.306 CC lib/ftl/ftl_l2p.o 00:02:40.306 CC lib/nbd/nbd_rpc.o 00:02:40.306 CC lib/scsi/scsi_pr.o 00:02:40.307 LIB libspdk_blobfs.a 00:02:40.307 CC lib/scsi/scsi_rpc.o 00:02:40.307 SO libspdk_blobfs.so.11.0 00:02:40.307 LIB libspdk_lvol.a 00:02:40.307 CC lib/scsi/task.o 00:02:40.307 SO libspdk_lvol.so.11.0 00:02:40.307 SYMLINK libspdk_blobfs.so 00:02:40.307 CC lib/ublk/ublk_rpc.o 00:02:40.307 LIB libspdk_nbd.a 00:02:40.307 SYMLINK libspdk_lvol.so 00:02:40.307 CC lib/nvmf/ctrlr_discovery.o 00:02:40.307 CC lib/ftl/ftl_l2p_flat.o 00:02:40.307 SO libspdk_nbd.so.7.0 00:02:40.307 CC lib/nvmf/ctrlr_bdev.o 00:02:40.567 CC lib/ftl/ftl_nv_cache.o 00:02:40.567 SYMLINK libspdk_nbd.so 00:02:40.567 CC lib/ftl/ftl_band.o 00:02:40.567 CC lib/ftl/ftl_band_ops.o 00:02:40.567 LIB libspdk_ublk.a 00:02:40.567 SO libspdk_ublk.so.3.0 00:02:40.567 CC lib/ftl/ftl_writer.o 00:02:40.567 CC lib/nvmf/subsystem.o 00:02:40.567 SYMLINK libspdk_ublk.so 00:02:40.567 CC lib/nvmf/nvmf.o 00:02:40.567 LIB libspdk_scsi.a 00:02:40.567 SO libspdk_scsi.so.9.0 00:02:40.827 SYMLINK libspdk_scsi.so 00:02:40.827 CC lib/ftl/ftl_rq.o 00:02:40.827 CC lib/ftl/ftl_reloc.o 00:02:40.827 CC lib/ftl/ftl_l2p_cache.o 00:02:40.827 CC lib/nvmf/nvmf_rpc.o 00:02:40.827 CC lib/nvmf/transport.o 00:02:40.827 CC lib/nvmf/tcp.o 
00:02:41.087 CC lib/ftl/ftl_p2l.o 00:02:41.087 CC lib/iscsi/conn.o 00:02:41.347 CC lib/iscsi/init_grp.o 00:02:41.347 CC lib/nvmf/stubs.o 00:02:41.347 CC lib/nvmf/mdns_server.o 00:02:41.614 CC lib/ftl/ftl_p2l_log.o 00:02:41.614 CC lib/ftl/mngt/ftl_mngt.o 00:02:41.614 CC lib/nvmf/rdma.o 00:02:41.614 CC lib/vhost/vhost.o 00:02:41.614 CC lib/nvmf/auth.o 00:02:41.873 CC lib/iscsi/iscsi.o 00:02:41.873 CC lib/vhost/vhost_rpc.o 00:02:41.873 CC lib/vhost/vhost_scsi.o 00:02:41.873 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:41.873 CC lib/iscsi/param.o 00:02:41.873 CC lib/iscsi/portal_grp.o 00:02:42.132 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:42.132 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:42.132 CC lib/iscsi/tgt_node.o 00:02:42.132 CC lib/vhost/vhost_blk.o 00:02:42.392 CC lib/vhost/rte_vhost_user.o 00:02:42.392 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:42.392 CC lib/iscsi/iscsi_subsystem.o 00:02:42.392 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:42.652 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:42.652 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:42.652 CC lib/iscsi/iscsi_rpc.o 00:02:42.652 CC lib/iscsi/task.o 00:02:42.652 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:42.652 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:42.912 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:42.912 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:42.912 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:42.912 CC lib/ftl/utils/ftl_conf.o 00:02:42.912 CC lib/ftl/utils/ftl_md.o 00:02:42.912 CC lib/ftl/utils/ftl_mempool.o 00:02:43.172 CC lib/ftl/utils/ftl_property.o 00:02:43.172 CC lib/ftl/utils/ftl_bitmap.o 00:02:43.172 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:43.172 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:43.172 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:43.172 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:43.431 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:43.431 LIB libspdk_vhost.a 00:02:43.431 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:43.431 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:43.431 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:43.431 SO 
libspdk_vhost.so.8.0 00:02:43.431 LIB libspdk_iscsi.a 00:02:43.431 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:43.431 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:43.431 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:43.431 SYMLINK libspdk_vhost.so 00:02:43.431 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:43.431 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:43.431 CC lib/ftl/base/ftl_base_dev.o 00:02:43.431 SO libspdk_iscsi.so.8.0 00:02:43.690 CC lib/ftl/base/ftl_base_bdev.o 00:02:43.690 CC lib/ftl/ftl_trace.o 00:02:43.690 SYMLINK libspdk_iscsi.so 00:02:43.949 LIB libspdk_ftl.a 00:02:43.949 LIB libspdk_nvmf.a 00:02:43.949 SO libspdk_ftl.so.9.0 00:02:44.208 SO libspdk_nvmf.so.20.0 00:02:44.208 SYMLINK libspdk_ftl.so 00:02:44.468 SYMLINK libspdk_nvmf.so 00:02:44.727 CC module/env_dpdk/env_dpdk_rpc.o 00:02:44.727 CC module/accel/iaa/accel_iaa.o 00:02:44.727 CC module/accel/dsa/accel_dsa.o 00:02:44.727 CC module/sock/posix/posix.o 00:02:44.727 CC module/keyring/file/keyring.o 00:02:44.727 CC module/accel/error/accel_error.o 00:02:44.727 CC module/fsdev/aio/fsdev_aio.o 00:02:44.727 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:44.727 CC module/blob/bdev/blob_bdev.o 00:02:44.727 CC module/accel/ioat/accel_ioat.o 00:02:44.987 LIB libspdk_env_dpdk_rpc.a 00:02:44.987 SO libspdk_env_dpdk_rpc.so.6.0 00:02:44.987 SYMLINK libspdk_env_dpdk_rpc.so 00:02:44.987 CC module/keyring/file/keyring_rpc.o 00:02:44.987 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:44.987 CC module/accel/ioat/accel_ioat_rpc.o 00:02:44.987 CC module/accel/error/accel_error_rpc.o 00:02:44.987 LIB libspdk_scheduler_dynamic.a 00:02:44.987 CC module/accel/iaa/accel_iaa_rpc.o 00:02:44.987 SO libspdk_scheduler_dynamic.so.4.0 00:02:44.987 LIB libspdk_keyring_file.a 00:02:44.987 CC module/accel/dsa/accel_dsa_rpc.o 00:02:45.246 SYMLINK libspdk_scheduler_dynamic.so 00:02:45.246 SO libspdk_keyring_file.so.2.0 00:02:45.246 CC module/fsdev/aio/linux_aio_mgr.o 00:02:45.246 LIB libspdk_blob_bdev.a 00:02:45.246 LIB libspdk_accel_ioat.a 
00:02:45.246 SO libspdk_blob_bdev.so.12.0 00:02:45.246 LIB libspdk_accel_error.a 00:02:45.246 LIB libspdk_accel_iaa.a 00:02:45.246 SO libspdk_accel_ioat.so.6.0 00:02:45.246 SYMLINK libspdk_keyring_file.so 00:02:45.246 SO libspdk_accel_error.so.2.0 00:02:45.246 SO libspdk_accel_iaa.so.3.0 00:02:45.246 SYMLINK libspdk_blob_bdev.so 00:02:45.246 SYMLINK libspdk_accel_ioat.so 00:02:45.246 SYMLINK libspdk_accel_iaa.so 00:02:45.246 SYMLINK libspdk_accel_error.so 00:02:45.246 LIB libspdk_accel_dsa.a 00:02:45.246 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:45.246 SO libspdk_accel_dsa.so.5.0 00:02:45.246 CC module/keyring/linux/keyring.o 00:02:45.246 SYMLINK libspdk_accel_dsa.so 00:02:45.506 CC module/scheduler/gscheduler/gscheduler.o 00:02:45.506 LIB libspdk_scheduler_dpdk_governor.a 00:02:45.506 CC module/bdev/gpt/gpt.o 00:02:45.506 CC module/bdev/delay/vbdev_delay.o 00:02:45.506 CC module/bdev/error/vbdev_error.o 00:02:45.506 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:45.506 CC module/blobfs/bdev/blobfs_bdev.o 00:02:45.506 CC module/keyring/linux/keyring_rpc.o 00:02:45.506 CC module/bdev/lvol/vbdev_lvol.o 00:02:45.506 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:45.506 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:45.506 LIB libspdk_fsdev_aio.a 00:02:45.506 LIB libspdk_scheduler_gscheduler.a 00:02:45.506 SO libspdk_scheduler_gscheduler.so.4.0 00:02:45.506 SO libspdk_fsdev_aio.so.1.0 00:02:45.506 LIB libspdk_keyring_linux.a 00:02:45.506 LIB libspdk_sock_posix.a 00:02:45.506 SO libspdk_keyring_linux.so.1.0 00:02:45.506 CC module/bdev/gpt/vbdev_gpt.o 00:02:45.506 SO libspdk_sock_posix.so.6.0 00:02:45.506 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:45.506 SYMLINK libspdk_scheduler_gscheduler.so 00:02:45.766 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:45.766 SYMLINK libspdk_fsdev_aio.so 00:02:45.766 SYMLINK libspdk_keyring_linux.so 00:02:45.766 SYMLINK libspdk_sock_posix.so 00:02:45.766 CC module/bdev/error/vbdev_error_rpc.o 00:02:45.766 LIB 
libspdk_blobfs_bdev.a 00:02:45.766 CC module/bdev/malloc/bdev_malloc.o 00:02:45.766 SO libspdk_blobfs_bdev.so.6.0 00:02:45.766 CC module/bdev/null/bdev_null.o 00:02:45.766 CC module/bdev/nvme/bdev_nvme.o 00:02:45.766 LIB libspdk_bdev_delay.a 00:02:45.766 SO libspdk_bdev_delay.so.6.0 00:02:45.766 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:45.766 SYMLINK libspdk_blobfs_bdev.so 00:02:45.766 LIB libspdk_bdev_error.a 00:02:45.766 CC module/bdev/nvme/nvme_rpc.o 00:02:46.026 LIB libspdk_bdev_gpt.a 00:02:46.026 SO libspdk_bdev_error.so.6.0 00:02:46.026 SYMLINK libspdk_bdev_delay.so 00:02:46.026 CC module/bdev/nvme/bdev_mdns_client.o 00:02:46.026 SO libspdk_bdev_gpt.so.6.0 00:02:46.026 SYMLINK libspdk_bdev_error.so 00:02:46.026 CC module/bdev/nvme/vbdev_opal.o 00:02:46.026 LIB libspdk_bdev_lvol.a 00:02:46.026 CC module/bdev/passthru/vbdev_passthru.o 00:02:46.026 SYMLINK libspdk_bdev_gpt.so 00:02:46.026 SO libspdk_bdev_lvol.so.6.0 00:02:46.026 CC module/bdev/null/bdev_null_rpc.o 00:02:46.026 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:46.026 SYMLINK libspdk_bdev_lvol.so 00:02:46.026 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:46.286 CC module/bdev/raid/bdev_raid.o 00:02:46.286 CC module/bdev/raid/bdev_raid_rpc.o 00:02:46.286 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:46.286 LIB libspdk_bdev_null.a 00:02:46.286 CC module/bdev/raid/bdev_raid_sb.o 00:02:46.286 SO libspdk_bdev_null.so.6.0 00:02:46.286 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:46.286 CC module/bdev/raid/raid0.o 00:02:46.286 SYMLINK libspdk_bdev_null.so 00:02:46.286 LIB libspdk_bdev_malloc.a 00:02:46.286 CC module/bdev/raid/raid1.o 00:02:46.286 SO libspdk_bdev_malloc.so.6.0 00:02:46.545 LIB libspdk_bdev_passthru.a 00:02:46.545 CC module/bdev/split/vbdev_split.o 00:02:46.545 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:46.545 SO libspdk_bdev_passthru.so.6.0 00:02:46.545 SYMLINK libspdk_bdev_malloc.so 00:02:46.545 CC module/bdev/raid/concat.o 00:02:46.545 SYMLINK libspdk_bdev_passthru.so 
00:02:46.545 CC module/bdev/split/vbdev_split_rpc.o 00:02:46.545 CC module/bdev/raid/raid5f.o 00:02:46.545 CC module/bdev/aio/bdev_aio.o 00:02:46.545 CC module/bdev/aio/bdev_aio_rpc.o 00:02:46.545 CC module/bdev/ftl/bdev_ftl.o 00:02:46.805 LIB libspdk_bdev_split.a 00:02:46.805 SO libspdk_bdev_split.so.6.0 00:02:46.805 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:46.805 CC module/bdev/iscsi/bdev_iscsi.o 00:02:46.805 SYMLINK libspdk_bdev_split.so 00:02:46.805 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:46.805 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:46.805 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:47.065 LIB libspdk_bdev_zone_block.a 00:02:47.065 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:47.065 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:47.065 LIB libspdk_bdev_aio.a 00:02:47.065 SO libspdk_bdev_zone_block.so.6.0 00:02:47.065 LIB libspdk_bdev_ftl.a 00:02:47.065 SO libspdk_bdev_aio.so.6.0 00:02:47.065 SYMLINK libspdk_bdev_zone_block.so 00:02:47.065 SO libspdk_bdev_ftl.so.6.0 00:02:47.065 SYMLINK libspdk_bdev_aio.so 00:02:47.065 SYMLINK libspdk_bdev_ftl.so 00:02:47.065 LIB libspdk_bdev_iscsi.a 00:02:47.065 SO libspdk_bdev_iscsi.so.6.0 00:02:47.325 LIB libspdk_bdev_raid.a 00:02:47.325 SYMLINK libspdk_bdev_iscsi.so 00:02:47.325 SO libspdk_bdev_raid.so.6.0 00:02:47.325 LIB libspdk_bdev_virtio.a 00:02:47.325 SYMLINK libspdk_bdev_raid.so 00:02:47.325 SO libspdk_bdev_virtio.so.6.0 00:02:47.585 SYMLINK libspdk_bdev_virtio.so 00:02:48.523 LIB libspdk_bdev_nvme.a 00:02:48.523 SO libspdk_bdev_nvme.so.7.1 00:02:48.783 SYMLINK libspdk_bdev_nvme.so 00:02:49.352 CC module/event/subsystems/sock/sock.o 00:02:49.352 CC module/event/subsystems/keyring/keyring.o 00:02:49.352 CC module/event/subsystems/fsdev/fsdev.o 00:02:49.352 CC module/event/subsystems/scheduler/scheduler.o 00:02:49.352 CC module/event/subsystems/iobuf/iobuf.o 00:02:49.352 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:49.352 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:49.352 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:02:49.352 CC module/event/subsystems/vmd/vmd.o 00:02:49.352 LIB libspdk_event_fsdev.a 00:02:49.352 LIB libspdk_event_vhost_blk.a 00:02:49.352 LIB libspdk_event_keyring.a 00:02:49.353 LIB libspdk_event_scheduler.a 00:02:49.353 SO libspdk_event_fsdev.so.1.0 00:02:49.353 LIB libspdk_event_sock.a 00:02:49.353 SO libspdk_event_vhost_blk.so.3.0 00:02:49.353 LIB libspdk_event_iobuf.a 00:02:49.353 LIB libspdk_event_vmd.a 00:02:49.353 SO libspdk_event_scheduler.so.4.0 00:02:49.353 SO libspdk_event_keyring.so.1.0 00:02:49.353 SO libspdk_event_sock.so.5.0 00:02:49.353 SO libspdk_event_vmd.so.6.0 00:02:49.353 SO libspdk_event_iobuf.so.3.0 00:02:49.353 SYMLINK libspdk_event_fsdev.so 00:02:49.353 SYMLINK libspdk_event_vhost_blk.so 00:02:49.353 SYMLINK libspdk_event_keyring.so 00:02:49.353 SYMLINK libspdk_event_scheduler.so 00:02:49.353 SYMLINK libspdk_event_sock.so 00:02:49.353 SYMLINK libspdk_event_vmd.so 00:02:49.353 SYMLINK libspdk_event_iobuf.so 00:02:49.923 CC module/event/subsystems/accel/accel.o 00:02:49.923 LIB libspdk_event_accel.a 00:02:50.183 SO libspdk_event_accel.so.6.0 00:02:50.183 SYMLINK libspdk_event_accel.so 00:02:50.443 CC module/event/subsystems/bdev/bdev.o 00:02:50.704 LIB libspdk_event_bdev.a 00:02:50.704 SO libspdk_event_bdev.so.6.0 00:02:50.964 SYMLINK libspdk_event_bdev.so 00:02:51.223 CC module/event/subsystems/ublk/ublk.o 00:02:51.223 CC module/event/subsystems/nbd/nbd.o 00:02:51.223 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:51.223 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:51.223 CC module/event/subsystems/scsi/scsi.o 00:02:51.223 LIB libspdk_event_ublk.a 00:02:51.223 LIB libspdk_event_nbd.a 00:02:51.483 LIB libspdk_event_scsi.a 00:02:51.483 SO libspdk_event_ublk.so.3.0 00:02:51.483 SO libspdk_event_nbd.so.6.0 00:02:51.483 SO libspdk_event_scsi.so.6.0 00:02:51.483 SYMLINK libspdk_event_scsi.so 00:02:51.483 SYMLINK libspdk_event_nbd.so 00:02:51.483 SYMLINK libspdk_event_ublk.so 00:02:51.483 LIB 
libspdk_event_nvmf.a 00:02:51.483 SO libspdk_event_nvmf.so.6.0 00:02:51.483 SYMLINK libspdk_event_nvmf.so 00:02:51.742 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:51.742 CC module/event/subsystems/iscsi/iscsi.o 00:02:52.002 LIB libspdk_event_vhost_scsi.a 00:02:52.002 SO libspdk_event_vhost_scsi.so.3.0 00:02:52.002 LIB libspdk_event_iscsi.a 00:02:52.002 SO libspdk_event_iscsi.so.6.0 00:02:52.002 SYMLINK libspdk_event_vhost_scsi.so 00:02:52.002 SYMLINK libspdk_event_iscsi.so 00:02:52.263 SO libspdk.so.6.0 00:02:52.263 SYMLINK libspdk.so 00:02:52.523 CC app/trace_record/trace_record.o 00:02:52.524 CC app/spdk_lspci/spdk_lspci.o 00:02:52.524 CXX app/trace/trace.o 00:02:52.524 CC app/nvmf_tgt/nvmf_main.o 00:02:52.524 CC app/iscsi_tgt/iscsi_tgt.o 00:02:52.784 CC test/thread/poller_perf/poller_perf.o 00:02:52.784 CC app/spdk_tgt/spdk_tgt.o 00:02:52.784 CC examples/util/zipf/zipf.o 00:02:52.784 CC test/dma/test_dma/test_dma.o 00:02:52.784 LINK spdk_lspci 00:02:52.784 CC test/app/bdev_svc/bdev_svc.o 00:02:52.784 LINK poller_perf 00:02:52.784 LINK spdk_trace_record 00:02:52.784 LINK nvmf_tgt 00:02:52.784 LINK iscsi_tgt 00:02:52.784 LINK zipf 00:02:52.784 LINK spdk_tgt 00:02:53.045 LINK bdev_svc 00:02:53.045 LINK spdk_trace 00:02:53.045 CC app/spdk_nvme_perf/perf.o 00:02:53.045 TEST_HEADER include/spdk/accel.h 00:02:53.045 TEST_HEADER include/spdk/accel_module.h 00:02:53.045 TEST_HEADER include/spdk/assert.h 00:02:53.045 TEST_HEADER include/spdk/barrier.h 00:02:53.045 TEST_HEADER include/spdk/base64.h 00:02:53.045 TEST_HEADER include/spdk/bdev.h 00:02:53.045 TEST_HEADER include/spdk/bdev_module.h 00:02:53.045 TEST_HEADER include/spdk/bdev_zone.h 00:02:53.045 TEST_HEADER include/spdk/bit_array.h 00:02:53.045 TEST_HEADER include/spdk/bit_pool.h 00:02:53.045 TEST_HEADER include/spdk/blob_bdev.h 00:02:53.045 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:53.045 TEST_HEADER include/spdk/blobfs.h 00:02:53.045 TEST_HEADER include/spdk/blob.h 00:02:53.045 TEST_HEADER 
include/spdk/conf.h 00:02:53.045 TEST_HEADER include/spdk/config.h 00:02:53.045 TEST_HEADER include/spdk/cpuset.h 00:02:53.045 TEST_HEADER include/spdk/crc16.h 00:02:53.045 TEST_HEADER include/spdk/crc32.h 00:02:53.045 TEST_HEADER include/spdk/crc64.h 00:02:53.045 TEST_HEADER include/spdk/dif.h 00:02:53.045 TEST_HEADER include/spdk/dma.h 00:02:53.045 TEST_HEADER include/spdk/endian.h 00:02:53.045 TEST_HEADER include/spdk/env_dpdk.h 00:02:53.045 TEST_HEADER include/spdk/env.h 00:02:53.045 TEST_HEADER include/spdk/event.h 00:02:53.045 TEST_HEADER include/spdk/fd_group.h 00:02:53.045 TEST_HEADER include/spdk/fd.h 00:02:53.045 TEST_HEADER include/spdk/file.h 00:02:53.045 TEST_HEADER include/spdk/fsdev.h 00:02:53.045 TEST_HEADER include/spdk/fsdev_module.h 00:02:53.045 TEST_HEADER include/spdk/ftl.h 00:02:53.045 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:53.045 TEST_HEADER include/spdk/gpt_spec.h 00:02:53.045 TEST_HEADER include/spdk/hexlify.h 00:02:53.045 TEST_HEADER include/spdk/histogram_data.h 00:02:53.045 TEST_HEADER include/spdk/idxd.h 00:02:53.045 TEST_HEADER include/spdk/idxd_spec.h 00:02:53.045 TEST_HEADER include/spdk/init.h 00:02:53.045 TEST_HEADER include/spdk/ioat.h 00:02:53.045 TEST_HEADER include/spdk/ioat_spec.h 00:02:53.045 TEST_HEADER include/spdk/iscsi_spec.h 00:02:53.045 TEST_HEADER include/spdk/json.h 00:02:53.045 TEST_HEADER include/spdk/jsonrpc.h 00:02:53.045 CC app/spdk_nvme_identify/identify.o 00:02:53.045 TEST_HEADER include/spdk/keyring.h 00:02:53.045 TEST_HEADER include/spdk/keyring_module.h 00:02:53.045 TEST_HEADER include/spdk/likely.h 00:02:53.045 TEST_HEADER include/spdk/log.h 00:02:53.045 TEST_HEADER include/spdk/lvol.h 00:02:53.045 TEST_HEADER include/spdk/md5.h 00:02:53.045 TEST_HEADER include/spdk/memory.h 00:02:53.045 TEST_HEADER include/spdk/mmio.h 00:02:53.045 TEST_HEADER include/spdk/nbd.h 00:02:53.045 TEST_HEADER include/spdk/net.h 00:02:53.045 TEST_HEADER include/spdk/notify.h 00:02:53.045 TEST_HEADER include/spdk/nvme.h 
00:02:53.045 TEST_HEADER include/spdk/nvme_intel.h 00:02:53.045 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:53.045 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:53.045 TEST_HEADER include/spdk/nvme_spec.h 00:02:53.045 TEST_HEADER include/spdk/nvme_zns.h 00:02:53.306 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:53.306 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:53.306 TEST_HEADER include/spdk/nvmf.h 00:02:53.306 TEST_HEADER include/spdk/nvmf_spec.h 00:02:53.306 TEST_HEADER include/spdk/nvmf_transport.h 00:02:53.306 TEST_HEADER include/spdk/opal.h 00:02:53.306 TEST_HEADER include/spdk/opal_spec.h 00:02:53.306 TEST_HEADER include/spdk/pci_ids.h 00:02:53.306 CC examples/ioat/perf/perf.o 00:02:53.306 TEST_HEADER include/spdk/pipe.h 00:02:53.306 TEST_HEADER include/spdk/queue.h 00:02:53.306 TEST_HEADER include/spdk/reduce.h 00:02:53.306 TEST_HEADER include/spdk/rpc.h 00:02:53.306 TEST_HEADER include/spdk/scheduler.h 00:02:53.306 TEST_HEADER include/spdk/scsi.h 00:02:53.306 TEST_HEADER include/spdk/scsi_spec.h 00:02:53.306 TEST_HEADER include/spdk/sock.h 00:02:53.306 CC examples/vmd/lsvmd/lsvmd.o 00:02:53.306 TEST_HEADER include/spdk/stdinc.h 00:02:53.306 TEST_HEADER include/spdk/string.h 00:02:53.306 TEST_HEADER include/spdk/thread.h 00:02:53.306 TEST_HEADER include/spdk/trace.h 00:02:53.306 CC test/env/vtophys/vtophys.o 00:02:53.306 TEST_HEADER include/spdk/trace_parser.h 00:02:53.306 TEST_HEADER include/spdk/tree.h 00:02:53.306 TEST_HEADER include/spdk/ublk.h 00:02:53.306 TEST_HEADER include/spdk/util.h 00:02:53.306 TEST_HEADER include/spdk/uuid.h 00:02:53.306 TEST_HEADER include/spdk/version.h 00:02:53.306 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:53.306 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:53.306 TEST_HEADER include/spdk/vhost.h 00:02:53.306 TEST_HEADER include/spdk/vmd.h 00:02:53.306 TEST_HEADER include/spdk/xor.h 00:02:53.306 TEST_HEADER include/spdk/zipf.h 00:02:53.306 CXX test/cpp_headers/accel.o 00:02:53.306 LINK test_dma 00:02:53.306 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:53.306 CC test/app/histogram_perf/histogram_perf.o 00:02:53.306 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:53.306 LINK lsvmd 00:02:53.306 LINK vtophys 00:02:53.306 LINK ioat_perf 00:02:53.306 CXX test/cpp_headers/accel_module.o 00:02:53.306 LINK histogram_perf 00:02:53.567 CXX test/cpp_headers/assert.o 00:02:53.567 CC examples/vmd/led/led.o 00:02:53.567 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:53.567 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:53.567 CC examples/ioat/verify/verify.o 00:02:53.567 CC test/app/jsoncat/jsoncat.o 00:02:53.567 LINK led 00:02:53.567 CXX test/cpp_headers/barrier.o 00:02:53.827 LINK nvme_fuzz 00:02:53.827 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:53.827 LINK jsoncat 00:02:53.827 LINK mem_callbacks 00:02:53.827 LINK verify 00:02:53.827 LINK spdk_nvme_perf 00:02:53.827 CXX test/cpp_headers/base64.o 00:02:53.827 CXX test/cpp_headers/bdev.o 00:02:53.827 CXX test/cpp_headers/bdev_module.o 00:02:54.088 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:54.088 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:54.088 CXX test/cpp_headers/bdev_zone.o 00:02:54.088 LINK spdk_nvme_identify 00:02:54.088 CC test/env/memory/memory_ut.o 00:02:54.088 CC test/env/pci/pci_ut.o 00:02:54.088 CC examples/idxd/perf/perf.o 00:02:54.088 LINK env_dpdk_post_init 00:02:54.088 CC examples/thread/thread/thread_ex.o 00:02:54.088 LINK vhost_fuzz 00:02:54.088 LINK interrupt_tgt 00:02:54.088 CXX test/cpp_headers/bit_array.o 00:02:54.347 CC app/spdk_nvme_discover/discovery_aer.o 00:02:54.347 CXX test/cpp_headers/bit_pool.o 00:02:54.347 CC test/rpc_client/rpc_client_test.o 00:02:54.347 LINK thread 00:02:54.347 CC test/event/event_perf/event_perf.o 00:02:54.347 LINK idxd_perf 00:02:54.347 LINK pci_ut 00:02:54.347 CC test/nvme/aer/aer.o 00:02:54.608 CXX test/cpp_headers/blob_bdev.o 00:02:54.608 LINK spdk_nvme_discover 00:02:54.608 LINK event_perf 00:02:54.608 LINK rpc_client_test 00:02:54.608 CC 
test/nvme/reset/reset.o 00:02:54.608 CXX test/cpp_headers/blobfs_bdev.o 00:02:54.608 CC examples/sock/hello_world/hello_sock.o 00:02:54.609 CC test/event/reactor/reactor.o 00:02:54.869 CC app/spdk_top/spdk_top.o 00:02:54.869 CC test/nvme/sgl/sgl.o 00:02:54.869 LINK aer 00:02:54.869 CXX test/cpp_headers/blobfs.o 00:02:54.869 LINK reset 00:02:54.869 CC test/accel/dif/dif.o 00:02:54.869 LINK reactor 00:02:54.869 LINK hello_sock 00:02:55.129 CXX test/cpp_headers/blob.o 00:02:55.129 LINK sgl 00:02:55.129 CC test/event/reactor_perf/reactor_perf.o 00:02:55.129 LINK memory_ut 00:02:55.129 CC test/blobfs/mkfs/mkfs.o 00:02:55.129 CXX test/cpp_headers/conf.o 00:02:55.129 LINK reactor_perf 00:02:55.129 CC test/lvol/esnap/esnap.o 00:02:55.129 CC test/nvme/e2edp/nvme_dp.o 00:02:55.390 LINK iscsi_fuzz 00:02:55.390 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:55.390 LINK mkfs 00:02:55.390 CXX test/cpp_headers/config.o 00:02:55.390 CXX test/cpp_headers/cpuset.o 00:02:55.390 CC app/vhost/vhost.o 00:02:55.390 CC test/event/app_repeat/app_repeat.o 00:02:55.390 LINK nvme_dp 00:02:55.390 CXX test/cpp_headers/crc16.o 00:02:55.650 CC test/nvme/overhead/overhead.o 00:02:55.650 LINK hello_fsdev 00:02:55.650 CC test/app/stub/stub.o 00:02:55.650 LINK vhost 00:02:55.650 LINK app_repeat 00:02:55.650 CXX test/cpp_headers/crc32.o 00:02:55.650 LINK dif 00:02:55.650 LINK spdk_top 00:02:55.650 CC test/nvme/err_injection/err_injection.o 00:02:55.650 LINK stub 00:02:55.912 CXX test/cpp_headers/crc64.o 00:02:55.912 LINK overhead 00:02:55.912 CC app/spdk_dd/spdk_dd.o 00:02:55.912 CC test/event/scheduler/scheduler.o 00:02:55.912 CC examples/accel/perf/accel_perf.o 00:02:55.912 LINK err_injection 00:02:55.912 CC test/nvme/startup/startup.o 00:02:55.912 CC test/nvme/reserve/reserve.o 00:02:55.912 CXX test/cpp_headers/dif.o 00:02:55.912 CC test/nvme/simple_copy/simple_copy.o 00:02:56.179 LINK startup 00:02:56.179 LINK scheduler 00:02:56.179 CXX test/cpp_headers/dma.o 00:02:56.179 CC 
test/nvme/connect_stress/connect_stress.o 00:02:56.179 CC app/fio/nvme/fio_plugin.o 00:02:56.179 LINK reserve 00:02:56.179 LINK simple_copy 00:02:56.179 LINK spdk_dd 00:02:56.179 CXX test/cpp_headers/endian.o 00:02:56.179 CXX test/cpp_headers/env_dpdk.o 00:02:56.179 LINK connect_stress 00:02:56.179 CC test/nvme/boot_partition/boot_partition.o 00:02:56.446 CXX test/cpp_headers/env.o 00:02:56.446 CXX test/cpp_headers/event.o 00:02:56.446 LINK accel_perf 00:02:56.446 CXX test/cpp_headers/fd_group.o 00:02:56.446 CC test/bdev/bdevio/bdevio.o 00:02:56.446 LINK boot_partition 00:02:56.446 CC examples/blob/hello_world/hello_blob.o 00:02:56.446 CC examples/nvme/hello_world/hello_world.o 00:02:56.705 CXX test/cpp_headers/fd.o 00:02:56.705 CC examples/nvme/reconnect/reconnect.o 00:02:56.705 CC examples/blob/cli/blobcli.o 00:02:56.705 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:56.705 CC test/nvme/compliance/nvme_compliance.o 00:02:56.705 LINK spdk_nvme 00:02:56.705 CXX test/cpp_headers/file.o 00:02:56.705 LINK hello_blob 00:02:56.705 LINK hello_world 00:02:56.705 LINK bdevio 00:02:56.964 CXX test/cpp_headers/fsdev.o 00:02:56.964 CC app/fio/bdev/fio_plugin.o 00:02:56.964 CC examples/nvme/arbitration/arbitration.o 00:02:56.964 CXX test/cpp_headers/fsdev_module.o 00:02:56.964 LINK reconnect 00:02:56.964 CC examples/nvme/hotplug/hotplug.o 00:02:56.964 LINK nvme_compliance 00:02:56.964 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.222 LINK blobcli 00:02:57.222 LINK nvme_manage 00:02:57.222 CXX test/cpp_headers/ftl.o 00:02:57.222 LINK cmb_copy 00:02:57.222 LINK hotplug 00:02:57.222 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.222 LINK arbitration 00:02:57.222 CXX test/cpp_headers/fuse_dispatcher.o 00:02:57.480 CC test/nvme/fdp/fdp.o 00:02:57.480 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:57.480 CC examples/bdev/hello_world/hello_bdev.o 00:02:57.480 CXX test/cpp_headers/gpt_spec.o 00:02:57.480 LINK spdk_bdev 00:02:57.480 CXX test/cpp_headers/hexlify.o 
00:02:57.480 CC examples/nvme/abort/abort.o 00:02:57.480 CC test/nvme/cuse/cuse.o 00:02:57.480 LINK fused_ordering 00:02:57.480 LINK doorbell_aers 00:02:57.480 CXX test/cpp_headers/histogram_data.o 00:02:57.739 CC examples/bdev/bdevperf/bdevperf.o 00:02:57.739 LINK hello_bdev 00:02:57.739 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:57.739 CXX test/cpp_headers/idxd.o 00:02:57.739 CXX test/cpp_headers/idxd_spec.o 00:02:57.739 CXX test/cpp_headers/init.o 00:02:57.739 LINK fdp 00:02:57.739 CXX test/cpp_headers/ioat.o 00:02:57.739 LINK abort 00:02:57.998 CXX test/cpp_headers/ioat_spec.o 00:02:57.998 LINK pmr_persistence 00:02:57.998 CXX test/cpp_headers/iscsi_spec.o 00:02:57.998 CXX test/cpp_headers/json.o 00:02:57.998 CXX test/cpp_headers/jsonrpc.o 00:02:57.998 CXX test/cpp_headers/keyring.o 00:02:57.998 CXX test/cpp_headers/keyring_module.o 00:02:57.998 CXX test/cpp_headers/likely.o 00:02:57.998 CXX test/cpp_headers/log.o 00:02:57.998 CXX test/cpp_headers/lvol.o 00:02:57.998 CXX test/cpp_headers/md5.o 00:02:58.257 CXX test/cpp_headers/memory.o 00:02:58.257 CXX test/cpp_headers/mmio.o 00:02:58.257 CXX test/cpp_headers/nbd.o 00:02:58.257 CXX test/cpp_headers/net.o 00:02:58.257 CXX test/cpp_headers/notify.o 00:02:58.257 CXX test/cpp_headers/nvme.o 00:02:58.257 CXX test/cpp_headers/nvme_intel.o 00:02:58.257 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.257 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:58.257 CXX test/cpp_headers/nvme_spec.o 00:02:58.257 CXX test/cpp_headers/nvme_zns.o 00:02:58.257 CXX test/cpp_headers/nvmf_cmd.o 00:02:58.257 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:58.515 CXX test/cpp_headers/nvmf.o 00:02:58.515 CXX test/cpp_headers/nvmf_spec.o 00:02:58.515 CXX test/cpp_headers/nvmf_transport.o 00:02:58.515 LINK bdevperf 00:02:58.515 CXX test/cpp_headers/opal.o 00:02:58.515 CXX test/cpp_headers/opal_spec.o 00:02:58.515 CXX test/cpp_headers/pci_ids.o 00:02:58.515 CXX test/cpp_headers/pipe.o 00:02:58.515 CXX test/cpp_headers/queue.o 00:02:58.515 
CXX test/cpp_headers/reduce.o 00:02:58.515 CXX test/cpp_headers/rpc.o 00:02:58.774 CXX test/cpp_headers/scheduler.o 00:02:58.774 CXX test/cpp_headers/scsi.o 00:02:58.774 CXX test/cpp_headers/scsi_spec.o 00:02:58.774 CXX test/cpp_headers/sock.o 00:02:58.774 CXX test/cpp_headers/stdinc.o 00:02:58.774 CXX test/cpp_headers/string.o 00:02:58.774 CXX test/cpp_headers/thread.o 00:02:58.774 LINK cuse 00:02:58.774 CXX test/cpp_headers/trace.o 00:02:58.774 CXX test/cpp_headers/trace_parser.o 00:02:58.774 CXX test/cpp_headers/tree.o 00:02:58.774 CXX test/cpp_headers/ublk.o 00:02:58.774 CXX test/cpp_headers/util.o 00:02:58.774 CXX test/cpp_headers/uuid.o 00:02:59.032 CXX test/cpp_headers/version.o 00:02:59.032 CC examples/nvmf/nvmf/nvmf.o 00:02:59.032 CXX test/cpp_headers/vfio_user_pci.o 00:02:59.032 CXX test/cpp_headers/vfio_user_spec.o 00:02:59.032 CXX test/cpp_headers/vhost.o 00:02:59.032 CXX test/cpp_headers/vmd.o 00:02:59.032 CXX test/cpp_headers/xor.o 00:02:59.032 CXX test/cpp_headers/zipf.o 00:02:59.291 LINK nvmf 00:03:01.196 LINK esnap 00:03:01.455 00:03:01.455 real 1m18.054s 00:03:01.455 user 6m57.189s 00:03:01.455 sys 1m33.149s 00:03:01.455 02:18:34 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:01.455 02:18:34 make -- common/autotest_common.sh@10 -- $ set +x 00:03:01.455 ************************************ 00:03:01.455 END TEST make 00:03:01.455 ************************************ 00:03:01.455 02:18:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:01.455 02:18:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:01.455 02:18:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:01.455 02:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.455 02:18:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:01.455 02:18:35 -- pm/common@44 -- $ pid=5481 00:03:01.455 02:18:35 -- pm/common@50 -- $ kill -TERM 5481 00:03:01.455 02:18:35 -- pm/common@42 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.455 02:18:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:01.455 02:18:35 -- pm/common@44 -- $ pid=5482 00:03:01.455 02:18:35 -- pm/common@50 -- $ kill -TERM 5482 00:03:01.455 02:18:35 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:01.455 02:18:35 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:01.714 02:18:35 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:01.714 02:18:35 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:01.714 02:18:35 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:01.714 02:18:35 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:01.714 02:18:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:01.714 02:18:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:01.714 02:18:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:01.714 02:18:35 -- scripts/common.sh@336 -- # IFS=.-: 00:03:01.714 02:18:35 -- scripts/common.sh@336 -- # read -ra ver1 00:03:01.714 02:18:35 -- scripts/common.sh@337 -- # IFS=.-: 00:03:01.714 02:18:35 -- scripts/common.sh@337 -- # read -ra ver2 00:03:01.714 02:18:35 -- scripts/common.sh@338 -- # local 'op=<' 00:03:01.714 02:18:35 -- scripts/common.sh@340 -- # ver1_l=2 00:03:01.714 02:18:35 -- scripts/common.sh@341 -- # ver2_l=1 00:03:01.714 02:18:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:01.714 02:18:35 -- scripts/common.sh@344 -- # case "$op" in 00:03:01.714 02:18:35 -- scripts/common.sh@345 -- # : 1 00:03:01.714 02:18:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:01.714 02:18:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:01.714 02:18:35 -- scripts/common.sh@365 -- # decimal 1 00:03:01.714 02:18:35 -- scripts/common.sh@353 -- # local d=1 00:03:01.714 02:18:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:01.714 02:18:35 -- scripts/common.sh@355 -- # echo 1 00:03:01.714 02:18:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:01.714 02:18:35 -- scripts/common.sh@366 -- # decimal 2 00:03:01.714 02:18:35 -- scripts/common.sh@353 -- # local d=2 00:03:01.714 02:18:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:01.714 02:18:35 -- scripts/common.sh@355 -- # echo 2 00:03:01.714 02:18:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:01.714 02:18:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:01.714 02:18:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:01.714 02:18:35 -- scripts/common.sh@368 -- # return 0 00:03:01.714 02:18:35 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:01.714 02:18:35 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:01.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.714 --rc genhtml_branch_coverage=1 00:03:01.714 --rc genhtml_function_coverage=1 00:03:01.714 --rc genhtml_legend=1 00:03:01.714 --rc geninfo_all_blocks=1 00:03:01.714 --rc geninfo_unexecuted_blocks=1 00:03:01.714 00:03:01.714 ' 00:03:01.714 02:18:35 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:01.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.714 --rc genhtml_branch_coverage=1 00:03:01.714 --rc genhtml_function_coverage=1 00:03:01.714 --rc genhtml_legend=1 00:03:01.714 --rc geninfo_all_blocks=1 00:03:01.714 --rc geninfo_unexecuted_blocks=1 00:03:01.714 00:03:01.714 ' 00:03:01.714 02:18:35 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:01.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.714 --rc genhtml_branch_coverage=1 00:03:01.714 --rc 
genhtml_function_coverage=1 00:03:01.714 --rc genhtml_legend=1 00:03:01.714 --rc geninfo_all_blocks=1 00:03:01.714 --rc geninfo_unexecuted_blocks=1 00:03:01.714 00:03:01.714 ' 00:03:01.714 02:18:35 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:01.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.714 --rc genhtml_branch_coverage=1 00:03:01.714 --rc genhtml_function_coverage=1 00:03:01.714 --rc genhtml_legend=1 00:03:01.714 --rc geninfo_all_blocks=1 00:03:01.714 --rc geninfo_unexecuted_blocks=1 00:03:01.714 00:03:01.714 ' 00:03:01.714 02:18:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:01.714 02:18:35 -- nvmf/common.sh@7 -- # uname -s 00:03:01.714 02:18:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:01.714 02:18:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:01.714 02:18:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:01.714 02:18:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:01.714 02:18:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:01.714 02:18:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:01.714 02:18:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:01.714 02:18:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:01.714 02:18:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:01.714 02:18:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:01.714 02:18:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da1044c6-56df-42b4-a1ba-44edbe26f207 00:03:01.714 02:18:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=da1044c6-56df-42b4-a1ba-44edbe26f207 00:03:01.714 02:18:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:01.714 02:18:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:01.714 02:18:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:01.714 02:18:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:01.714 02:18:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:01.714 02:18:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:01.714 02:18:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:01.714 02:18:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:01.714 02:18:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:01.714 02:18:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.714 02:18:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.714 02:18:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.714 02:18:35 -- paths/export.sh@5 -- # export PATH 00:03:01.714 02:18:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.714 02:18:35 -- nvmf/common.sh@51 -- # : 0 00:03:01.714 02:18:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:01.714 02:18:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:01.714 02:18:35 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:01.714 02:18:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:01.714 02:18:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:01.714 02:18:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:01.714 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:01.714 02:18:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:01.714 02:18:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:01.714 02:18:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:01.714 02:18:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:01.714 02:18:35 -- spdk/autotest.sh@32 -- # uname -s 00:03:01.714 02:18:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:01.714 02:18:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:01.714 02:18:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:01.714 02:18:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:01.714 02:18:35 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:01.714 02:18:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:01.973 02:18:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:01.973 02:18:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:01.973 02:18:35 -- spdk/autotest.sh@48 -- # udevadm_pid=54401 00:03:01.973 02:18:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:01.973 02:18:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:01.973 02:18:35 -- pm/common@17 -- # local monitor 00:03:01.973 02:18:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.973 02:18:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.973 02:18:35 -- pm/common@25 -- # sleep 1 00:03:01.973 02:18:35 -- pm/common@21 -- # date +%s 00:03:01.973 02:18:35 -- 
pm/common@21 -- # date +%s 00:03:01.973 02:18:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732760315 00:03:01.973 02:18:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732760315 00:03:01.973 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732760315_collect-cpu-load.pm.log 00:03:01.973 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732760315_collect-vmstat.pm.log 00:03:02.911 02:18:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:02.911 02:18:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:02.911 02:18:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:02.911 02:18:36 -- common/autotest_common.sh@10 -- # set +x 00:03:02.911 02:18:36 -- spdk/autotest.sh@59 -- # create_test_list 00:03:02.911 02:18:36 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:02.911 02:18:36 -- common/autotest_common.sh@10 -- # set +x 00:03:02.911 02:18:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:02.911 02:18:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:02.911 02:18:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:02.911 02:18:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:02.911 02:18:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:02.911 02:18:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:02.911 02:18:36 -- common/autotest_common.sh@1457 -- # uname 00:03:02.911 02:18:36 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:02.911 02:18:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:02.911 02:18:36 -- common/autotest_common.sh@1477 -- 
# uname 00:03:02.911 02:18:36 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:02.911 02:18:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:02.911 02:18:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:02.911 lcov: LCOV version 1.15 00:03:02.911 02:18:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:17.805 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:17.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:32.753 02:19:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:32.753 02:19:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:32.753 02:19:05 -- common/autotest_common.sh@10 -- # set +x 00:03:32.753 02:19:05 -- spdk/autotest.sh@78 -- # rm -f 00:03:32.753 02:19:05 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:33.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.323 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:33.323 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:33.323 02:19:06 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:33.323 02:19:06 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:33.323 02:19:06 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:33.323 02:19:06 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:33.323 
02:19:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:33.323 02:19:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:33.323 02:19:06 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:33.323 02:19:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:33.323 02:19:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:33.323 02:19:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:33.323 02:19:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:33.323 02:19:06 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:33.323 02:19:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:33.323 02:19:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:33.323 02:19:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:33.323 02:19:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:33.323 02:19:06 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:33.323 02:19:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:33.323 02:19:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:33.323 02:19:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:33.323 02:19:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:33.323 02:19:06 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:33.323 02:19:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:33.323 02:19:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:33.323 02:19:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:33.323 02:19:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:33.323 02:19:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:33.323 02:19:06 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:03:33.323 02:19:06 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:33.323 02:19:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:33.323 No valid GPT data, bailing 00:03:33.323 02:19:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:33.323 02:19:06 -- scripts/common.sh@394 -- # pt= 00:03:33.323 02:19:06 -- scripts/common.sh@395 -- # return 1 00:03:33.323 02:19:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:33.323 1+0 records in 00:03:33.323 1+0 records out 00:03:33.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059116 s, 177 MB/s 00:03:33.323 02:19:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:33.323 02:19:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:33.323 02:19:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:33.323 02:19:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:33.323 02:19:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:33.323 No valid GPT data, bailing 00:03:33.323 02:19:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:33.584 02:19:07 -- scripts/common.sh@394 -- # pt= 00:03:33.584 02:19:07 -- scripts/common.sh@395 -- # return 1 00:03:33.584 02:19:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:33.584 1+0 records in 00:03:33.584 1+0 records out 00:03:33.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504567 s, 208 MB/s 00:03:33.584 02:19:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:33.584 02:19:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:33.584 02:19:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:33.584 02:19:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:33.584 02:19:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:03:33.584 No valid GPT data, bailing 00:03:33.584 02:19:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:33.584 02:19:07 -- scripts/common.sh@394 -- # pt= 00:03:33.584 02:19:07 -- scripts/common.sh@395 -- # return 1 00:03:33.584 02:19:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:33.584 1+0 records in 00:03:33.584 1+0 records out 00:03:33.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00370005 s, 283 MB/s 00:03:33.584 02:19:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:33.584 02:19:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:33.584 02:19:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:33.584 02:19:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:33.584 02:19:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:33.584 No valid GPT data, bailing 00:03:33.584 02:19:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:33.584 02:19:07 -- scripts/common.sh@394 -- # pt= 00:03:33.584 02:19:07 -- scripts/common.sh@395 -- # return 1 00:03:33.584 02:19:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:33.584 1+0 records in 00:03:33.584 1+0 records out 00:03:33.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00758979 s, 138 MB/s 00:03:33.584 02:19:07 -- spdk/autotest.sh@105 -- # sync 00:03:33.843 02:19:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:33.843 02:19:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:33.843 02:19:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:36.384 02:19:09 -- spdk/autotest.sh@111 -- # uname -s 00:03:36.384 02:19:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:36.384 02:19:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:36.384 02:19:09 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
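The pre-cleanup flow traced above (skip zoned namespaces, probe for a partition table, then `dd` over the first MiB of each unclaimed device) can be sketched as a standalone script. This is a hedged reconstruction: `SYSFS_ROOT` and the regular-file "devices" are stand-ins invented so the logic runs without real NVMe hardware, and the spdk-gpt.py/blkid probe is elided since the trace shows it finding nothing.

```shell
#!/usr/bin/env bash
# Sketch of the autotest pre-cleanup: filter zoned devices, then wipe the rest.
# SYSFS_ROOT and WORKDIR are demo stand-ins for /sys and /dev (assumptions).
set -euo pipefail

SYSFS_ROOT=$(mktemp -d)   # fake /sys tree
WORKDIR=$(mktemp -d)      # fake /dev, backed by regular files

is_block_zoned() {
    # A namespace is zoned when queue/zoned exists and is not "none"
    local zoned="$SYSFS_ROOT/block/$1/queue/zoned"
    [[ -e $zoned && $(<"$zoned") != none ]]
}

# Build two fake namespaces: one conventional, one host-managed ZNS
for dev in nvme0n1 nvme1n1; do
    mkdir -p "$SYSFS_ROOT/block/$dev/queue"
    truncate -s 2M "$WORKDIR/$dev"
done
echo none         > "$SYSFS_ROOT/block/nvme0n1/queue/zoned"
echo host-managed > "$SYSFS_ROOT/block/nvme1n1/queue/zoned"

wiped=()
for dev in nvme0n1 nvme1n1; do
    is_block_zoned "$dev" && continue   # autotest skips zoned devices
    # Real flow: spdk-gpt.py and `blkid -s PTTYPE` decide whether the device
    # is in use; the trace shows blkid finding no PTTYPE, so wipe proceeds.
    dd if=/dev/zero of="$WORKDIR/$dev" bs=1M count=1 conv=notrunc status=none
    wiped+=("$dev")
done
printf 'wiped: %s\n' "${wiped[@]}"
```

The `1+0 records in / 1+0 records out` lines in the log correspond to the single 1 MiB `dd` per device.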
00:03:37.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:37.325 Hugepages 00:03:37.325 node hugesize free / total 00:03:37.325 node0 1048576kB 0 / 0 00:03:37.325 node0 2048kB 0 / 0 00:03:37.325 00:03:37.325 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:37.325 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:37.325 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:37.585 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:37.585 02:19:11 -- spdk/autotest.sh@117 -- # uname -s 00:03:37.585 02:19:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:37.585 02:19:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:37.585 02:19:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:38.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:38.414 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:38.414 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:38.414 02:19:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:39.794 02:19:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:39.794 02:19:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:39.794 02:19:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:39.794 02:19:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:39.794 02:19:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:39.794 02:19:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:39.794 02:19:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:39.794 02:19:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:39.794 02:19:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:39.794 02:19:13 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:39.794 02:19:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:39.794 02:19:13 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.071 Waiting for block devices as requested 00:03:40.071 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:40.351 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:40.351 02:19:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:40.351 02:19:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:40.351 02:19:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:40.351 02:19:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:40.351 02:19:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:40.351 02:19:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:40.351 02:19:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:40.351 02:19:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:40.351 02:19:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:40.351 02:19:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:40.351 02:19:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:40.351 02:19:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:40.351 02:19:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:40.351 02:19:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:40.351 02:19:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:40.351 02:19:13 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:03:40.351 02:19:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:40.351 02:19:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:40.351 02:19:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:40.351 02:19:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:40.351 02:19:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:40.351 02:19:13 -- common/autotest_common.sh@1543 -- # continue 00:03:40.351 02:19:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:40.351 02:19:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:40.351 02:19:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:40.351 02:19:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:40.351 02:19:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:40.351 02:19:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:40.351 02:19:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:40.351 02:19:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:40.351 02:19:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:40.351 02:19:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:40.351 02:19:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:40.351 02:19:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:40.351 02:19:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:40.351 02:19:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:40.351 02:19:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:40.351 02:19:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:40.351 02:19:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:03:40.351 02:19:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:40.351 02:19:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:40.351 02:19:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:40.351 02:19:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:40.351 02:19:13 -- common/autotest_common.sh@1543 -- # continue 00:03:40.351 02:19:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:40.351 02:19:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:40.351 02:19:13 -- common/autotest_common.sh@10 -- # set +x 00:03:40.351 02:19:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:40.351 02:19:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.351 02:19:14 -- common/autotest_common.sh@10 -- # set +x 00:03:40.612 02:19:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:41.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:41.445 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:41.445 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:41.445 02:19:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:41.445 02:19:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:41.445 02:19:15 -- common/autotest_common.sh@10 -- # set +x 00:03:41.445 02:19:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:41.445 02:19:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:41.445 02:19:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:41.445 02:19:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:41.445 02:19:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:41.445 02:19:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:41.445 02:19:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:41.445 02:19:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:41.445 
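The controller checks traced above reduce to two pipelines: `get_nvme_bdfs` pulls each PCI address (`traddr`) out of gen_nvme.sh's JSON with jq, and each controller's `nvme id-ctrl` output is grep/cut'ed for the `oacs` and `unvmcap` fields (OACS bit 3, value 0x8, is Namespace Management support). The sketch below uses a canned JSON string and canned id-ctrl lines as stand-ins; the real script shells out to gen_nvme.sh and the nvme CLI.

```shell
#!/usr/bin/env bash
# Sketch of get_nvme_bdfs plus the oacs/unvmcap parsing from the trace.
# fake_gen_nvme and fake_id_ctrl are canned stand-ins (assumptions).
set -euo pipefail

fake_gen_nvme='{"config":[{"params":{"traddr":"0000:00:10.0"}},
                          {"params":{"traddr":"0000:00:11.0"}}]}'
mapfile -t bdfs < <(jq -r '.config[].params.traddr' <<<"$fake_gen_nvme")
(( ${#bdfs[@]} > 0 )) || { echo 'No nvme devices found' >&2; exit 1; }

fake_id_ctrl=$'oacs      : 0x12a\nunvmcap   : 0'   # canned `nvme id-ctrl` lines

oacs=$(grep oacs <<<"$fake_id_ctrl" | cut -d: -f2)
oacs_ns_manage=$((oacs & 0x8))   # bit 3 = Namespace Management supported
unvmcap=$(( $(grep unvmcap <<<"$fake_id_ctrl" | cut -d: -f2) ))  # trim spaces

echo "bdfs=${bdfs[*]} ns_manage=$oacs_ns_manage unvmcap=$unvmcap"
```

With `oacs = 0x12a`, masking bit 3 yields 8, matching the trace's `oacs_ns_manage=8`; `unvmcap = 0` means no unallocated NVM capacity, so the script hits `continue` for both controllers.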
02:19:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:41.445 02:19:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:41.445 02:19:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:41.445 02:19:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:41.445 02:19:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:41.704 02:19:15 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:41.704 02:19:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:41.704 02:19:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:41.704 02:19:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:41.704 02:19:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:41.704 02:19:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:41.704 02:19:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:41.704 02:19:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:41.704 02:19:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:41.704 02:19:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:41.704 02:19:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:41.704 02:19:15 -- common/autotest_common.sh@1572 -- # return 0 00:03:41.704 02:19:15 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:41.704 02:19:15 -- common/autotest_common.sh@1580 -- # return 0 00:03:41.704 02:19:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:41.704 02:19:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:41.704 02:19:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:41.704 02:19:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:41.704 02:19:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:41.704 02:19:15 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:03:41.704 02:19:15 -- common/autotest_common.sh@10 -- # set +x 00:03:41.704 02:19:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:41.704 02:19:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:41.704 02:19:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.704 02:19:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.705 02:19:15 -- common/autotest_common.sh@10 -- # set +x 00:03:41.705 ************************************ 00:03:41.705 START TEST env 00:03:41.705 ************************************ 00:03:41.705 02:19:15 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:41.705 * Looking for test storage... 00:03:41.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:41.705 02:19:15 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:41.705 02:19:15 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:41.705 02:19:15 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:41.964 02:19:15 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:41.964 02:19:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.964 02:19:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.964 02:19:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.964 02:19:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.964 02:19:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.964 02:19:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.964 02:19:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.964 02:19:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.964 02:19:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.964 02:19:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.964 02:19:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.964 02:19:15 env -- 
scripts/common.sh@344 -- # case "$op" in 00:03:41.964 02:19:15 env -- scripts/common.sh@345 -- # : 1 00:03:41.964 02:19:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.964 02:19:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:41.964 02:19:15 env -- scripts/common.sh@365 -- # decimal 1 00:03:41.964 02:19:15 env -- scripts/common.sh@353 -- # local d=1 00:03:41.964 02:19:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.964 02:19:15 env -- scripts/common.sh@355 -- # echo 1 00:03:41.964 02:19:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.964 02:19:15 env -- scripts/common.sh@366 -- # decimal 2 00:03:41.964 02:19:15 env -- scripts/common.sh@353 -- # local d=2 00:03:41.964 02:19:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.964 02:19:15 env -- scripts/common.sh@355 -- # echo 2 00:03:41.964 02:19:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.964 02:19:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.964 02:19:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.964 02:19:15 env -- scripts/common.sh@368 -- # return 0 00:03:41.964 02:19:15 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.964 02:19:15 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:41.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.964 --rc genhtml_branch_coverage=1 00:03:41.964 --rc genhtml_function_coverage=1 00:03:41.964 --rc genhtml_legend=1 00:03:41.964 --rc geninfo_all_blocks=1 00:03:41.964 --rc geninfo_unexecuted_blocks=1 00:03:41.964 00:03:41.964 ' 00:03:41.964 02:19:15 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:41.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.964 --rc genhtml_branch_coverage=1 00:03:41.964 --rc genhtml_function_coverage=1 00:03:41.964 --rc genhtml_legend=1 00:03:41.964 --rc 
geninfo_all_blocks=1 00:03:41.964 --rc geninfo_unexecuted_blocks=1 00:03:41.964 00:03:41.964 ' 00:03:41.964 02:19:15 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:41.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.964 --rc genhtml_branch_coverage=1 00:03:41.964 --rc genhtml_function_coverage=1 00:03:41.964 --rc genhtml_legend=1 00:03:41.964 --rc geninfo_all_blocks=1 00:03:41.964 --rc geninfo_unexecuted_blocks=1 00:03:41.964 00:03:41.964 ' 00:03:41.964 02:19:15 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:41.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.964 --rc genhtml_branch_coverage=1 00:03:41.964 --rc genhtml_function_coverage=1 00:03:41.964 --rc genhtml_legend=1 00:03:41.964 --rc geninfo_all_blocks=1 00:03:41.964 --rc geninfo_unexecuted_blocks=1 00:03:41.964 00:03:41.964 ' 00:03:41.964 02:19:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:41.964 02:19:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.964 02:19:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.964 02:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.964 ************************************ 00:03:41.964 START TEST env_memory 00:03:41.964 ************************************ 00:03:41.964 02:19:15 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:41.964 00:03:41.964 00:03:41.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.964 http://cunit.sourceforge.net/ 00:03:41.964 00:03:41.964 00:03:41.964 Suite: memory 00:03:41.964 Test: alloc and free memory map ...[2024-11-28 02:19:15.536723] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:41.964 passed 00:03:41.964 Test: mem map translation ...[2024-11-28 02:19:15.580832] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:41.964 [2024-11-28 02:19:15.580888] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:41.965 [2024-11-28 02:19:15.580987] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:41.965 [2024-11-28 02:19:15.581030] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:42.225 passed 00:03:42.225 Test: mem map registration ...[2024-11-28 02:19:15.652492] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:42.225 [2024-11-28 02:19:15.652560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:42.225 passed 00:03:42.225 Test: mem map adjacent registrations ...passed 00:03:42.225 00:03:42.225 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.225 suites 1 1 n/a 0 0 00:03:42.225 tests 4 4 4 0 0 00:03:42.225 asserts 152 152 152 0 n/a 00:03:42.225 00:03:42.225 Elapsed time = 0.243 seconds 00:03:42.225 00:03:42.225 real 0m0.295s 00:03:42.225 user 0m0.261s 00:03:42.225 sys 0m0.023s 00:03:42.225 02:19:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.225 02:19:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:42.225 ************************************ 00:03:42.225 END TEST env_memory 00:03:42.225 ************************************ 00:03:42.225 02:19:15 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:42.225 
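The `lt 1.15 2` call traced earlier (deciding whether the installed lcov 1.15 predates the 2.x option set) walks scripts/common.sh's `cmp_versions`: split both versions on dots and compare field by field. A simplified sketch of that comparison follows; it handles only the less-than case, whereas the real helper dispatches on `$op` for the other operators.

```shell
#!/usr/bin/env bash
# Simplified sketch of the dotted-version compare behind `lt 1.15 2`.
# Missing fields are treated as 0, as in the ${verN[v]:-0} style of the trace.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # versions are equal
}

version_lt 1.15 2   && echo '1.15 < 2'     # lcov 1.15 predates 2.x
version_lt 2.0 1.15 || echo '2.0 >= 1.15'
```

Numeric field-wise comparison matters here: a plain string compare would wrongly order 1.2 after 1.10.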
02:19:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.225 02:19:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.225 02:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.225 ************************************ 00:03:42.225 START TEST env_vtophys 00:03:42.225 ************************************ 00:03:42.225 02:19:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:42.225 EAL: lib.eal log level changed from notice to debug 00:03:42.225 EAL: Detected lcore 0 as core 0 on socket 0 00:03:42.225 EAL: Detected lcore 1 as core 0 on socket 0 00:03:42.225 EAL: Detected lcore 2 as core 0 on socket 0 00:03:42.225 EAL: Detected lcore 3 as core 0 on socket 0 00:03:42.225 EAL: Detected lcore 4 as core 0 on socket 0 00:03:42.225 EAL: Detected lcore 5 as core 0 on socket 0 00:03:42.225 EAL: Detected lcore 6 as core 0 on socket 0 00:03:42.225 EAL: Detected lcore 7 as core 0 on socket 0 00:03:42.225 EAL: Detected lcore 8 as core 0 on socket 0 00:03:42.225 EAL: Detected lcore 9 as core 0 on socket 0 00:03:42.225 EAL: Maximum logical cores by configuration: 128 00:03:42.225 EAL: Detected CPU lcores: 10 00:03:42.225 EAL: Detected NUMA nodes: 1 00:03:42.225 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:42.225 EAL: Detected shared linkage of DPDK 00:03:42.485 EAL: No shared files mode enabled, IPC will be disabled 00:03:42.485 EAL: Selected IOVA mode 'PA' 00:03:42.485 EAL: Probing VFIO support... 00:03:42.485 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:42.485 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:42.485 EAL: Ask a virtual area of 0x2e000 bytes 00:03:42.485 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:42.485 EAL: Setting up physically contiguous memory... 
00:03:42.485 EAL: Setting maximum number of open files to 524288 00:03:42.485 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:42.485 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:42.485 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.485 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:42.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.485 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.485 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:42.485 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:42.485 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.485 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:42.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.485 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.485 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:42.485 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:42.485 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.485 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:42.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.485 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.485 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:42.485 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:42.485 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.485 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:42.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.485 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.485 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:42.485 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:42.485 EAL: Hugepages will be freed exactly as allocated. 
00:03:42.485 EAL: No shared files mode enabled, IPC is disabled 00:03:42.485 EAL: No shared files mode enabled, IPC is disabled 00:03:42.485 EAL: TSC frequency is ~2290000 KHz 00:03:42.485 EAL: Main lcore 0 is ready (tid=7fd7818c8a40;cpuset=[0]) 00:03:42.485 EAL: Trying to obtain current memory policy. 00:03:42.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.485 EAL: Restoring previous memory policy: 0 00:03:42.485 EAL: request: mp_malloc_sync 00:03:42.485 EAL: No shared files mode enabled, IPC is disabled 00:03:42.485 EAL: Heap on socket 0 was expanded by 2MB 00:03:42.485 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:42.485 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:42.485 EAL: Mem event callback 'spdk:(nil)' registered 00:03:42.485 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:42.485 00:03:42.485 00:03:42.485 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.485 http://cunit.sourceforge.net/ 00:03:42.485 00:03:42.485 00:03:42.485 Suite: components_suite 00:03:42.744 Test: vtophys_malloc_test ...passed 00:03:42.744 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:42.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.744 EAL: Restoring previous memory policy: 4 00:03:42.744 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.744 EAL: request: mp_malloc_sync 00:03:42.744 EAL: No shared files mode enabled, IPC is disabled 00:03:42.744 EAL: Heap on socket 0 was expanded by 4MB 00:03:42.744 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.744 EAL: request: mp_malloc_sync 00:03:42.744 EAL: No shared files mode enabled, IPC is disabled 00:03:42.744 EAL: Heap on socket 0 was shrunk by 4MB 00:03:42.744 EAL: Trying to obtain current memory policy. 
00:03:42.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.744 EAL: Restoring previous memory policy: 4 00:03:42.744 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.744 EAL: request: mp_malloc_sync 00:03:42.744 EAL: No shared files mode enabled, IPC is disabled 00:03:42.744 EAL: Heap on socket 0 was expanded by 6MB 00:03:42.744 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.744 EAL: request: mp_malloc_sync 00:03:42.744 EAL: No shared files mode enabled, IPC is disabled 00:03:42.744 EAL: Heap on socket 0 was shrunk by 6MB 00:03:43.004 EAL: Trying to obtain current memory policy. 00:03:43.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.004 EAL: Restoring previous memory policy: 4 00:03:43.004 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.004 EAL: request: mp_malloc_sync 00:03:43.004 EAL: No shared files mode enabled, IPC is disabled 00:03:43.004 EAL: Heap on socket 0 was expanded by 10MB 00:03:43.004 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.004 EAL: request: mp_malloc_sync 00:03:43.004 EAL: No shared files mode enabled, IPC is disabled 00:03:43.004 EAL: Heap on socket 0 was shrunk by 10MB 00:03:43.004 EAL: Trying to obtain current memory policy. 00:03:43.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.004 EAL: Restoring previous memory policy: 4 00:03:43.004 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.004 EAL: request: mp_malloc_sync 00:03:43.004 EAL: No shared files mode enabled, IPC is disabled 00:03:43.004 EAL: Heap on socket 0 was expanded by 18MB 00:03:43.004 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.004 EAL: request: mp_malloc_sync 00:03:43.004 EAL: No shared files mode enabled, IPC is disabled 00:03:43.004 EAL: Heap on socket 0 was shrunk by 18MB 00:03:43.004 EAL: Trying to obtain current memory policy. 
00:03:43.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.004 EAL: Restoring previous memory policy: 4 00:03:43.004 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.004 EAL: request: mp_malloc_sync 00:03:43.004 EAL: No shared files mode enabled, IPC is disabled 00:03:43.004 EAL: Heap on socket 0 was expanded by 34MB 00:03:43.004 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.004 EAL: request: mp_malloc_sync 00:03:43.004 EAL: No shared files mode enabled, IPC is disabled 00:03:43.004 EAL: Heap on socket 0 was shrunk by 34MB 00:03:43.004 EAL: Trying to obtain current memory policy. 00:03:43.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.004 EAL: Restoring previous memory policy: 4 00:03:43.004 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.004 EAL: request: mp_malloc_sync 00:03:43.004 EAL: No shared files mode enabled, IPC is disabled 00:03:43.004 EAL: Heap on socket 0 was expanded by 66MB 00:03:43.264 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.264 EAL: request: mp_malloc_sync 00:03:43.264 EAL: No shared files mode enabled, IPC is disabled 00:03:43.264 EAL: Heap on socket 0 was shrunk by 66MB 00:03:43.264 EAL: Trying to obtain current memory policy. 00:03:43.264 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.264 EAL: Restoring previous memory policy: 4 00:03:43.264 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.264 EAL: request: mp_malloc_sync 00:03:43.264 EAL: No shared files mode enabled, IPC is disabled 00:03:43.265 EAL: Heap on socket 0 was expanded by 130MB 00:03:43.525 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.525 EAL: request: mp_malloc_sync 00:03:43.525 EAL: No shared files mode enabled, IPC is disabled 00:03:43.525 EAL: Heap on socket 0 was shrunk by 130MB 00:03:43.784 EAL: Trying to obtain current memory policy. 
00:03:43.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.784 EAL: Restoring previous memory policy: 4 00:03:43.784 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.784 EAL: request: mp_malloc_sync 00:03:43.784 EAL: No shared files mode enabled, IPC is disabled 00:03:43.784 EAL: Heap on socket 0 was expanded by 258MB 00:03:44.354 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.354 EAL: request: mp_malloc_sync 00:03:44.354 EAL: No shared files mode enabled, IPC is disabled 00:03:44.354 EAL: Heap on socket 0 was shrunk by 258MB 00:03:44.922 EAL: Trying to obtain current memory policy. 00:03:44.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.922 EAL: Restoring previous memory policy: 4 00:03:44.922 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.922 EAL: request: mp_malloc_sync 00:03:44.922 EAL: No shared files mode enabled, IPC is disabled 00:03:44.922 EAL: Heap on socket 0 was expanded by 514MB 00:03:45.860 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.860 EAL: request: mp_malloc_sync 00:03:45.860 EAL: No shared files mode enabled, IPC is disabled 00:03:45.860 EAL: Heap on socket 0 was shrunk by 514MB 00:03:46.801 EAL: Trying to obtain current memory policy. 
00:03:46.801 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:46.801 EAL: Restoring previous memory policy: 4
00:03:46.801 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.801 EAL: request: mp_malloc_sync
00:03:46.801 EAL: No shared files mode enabled, IPC is disabled
00:03:46.801 EAL: Heap on socket 0 was expanded by 1026MB
00:03:48.710 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.710 EAL: request: mp_malloc_sync
00:03:48.710 EAL: No shared files mode enabled, IPC is disabled
00:03:48.710 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:50.620 passed
00:03:50.620
00:03:50.620 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.620 suites 1 1 n/a 0 0
00:03:50.620 tests 2 2 2 0 0
00:03:50.620 asserts 5726 5726 5726 0 n/a
00:03:50.620
00:03:50.620 Elapsed time = 7.839 seconds
00:03:50.620 EAL: Calling mem event callback 'spdk:(nil)'
00:03:50.620 EAL: request: mp_malloc_sync
00:03:50.620 EAL: No shared files mode enabled, IPC is disabled
00:03:50.620 EAL: Heap on socket 0 was shrunk by 2MB
00:03:50.620 EAL: No shared files mode enabled, IPC is disabled
00:03:50.620 EAL: No shared files mode enabled, IPC is disabled
00:03:50.620 EAL: No shared files mode enabled, IPC is disabled
00:03:50.620
00:03:50.620 real 0m8.164s
00:03:50.620 user 0m7.222s
00:03:50.620 sys 0m0.783s
00:03:50.620 02:19:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.620 02:19:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:50.620 ************************************
00:03:50.620 END TEST env_vtophys
00:03:50.620 ************************************
00:03:50.620 02:19:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:03:50.620 02:19:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.620 02:19:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.620 02:19:24 env -- common/autotest_common.sh@10 -- # set +x
00:03:50.620 ************************************
00:03:50.620 START TEST env_pci ************************************
00:03:50.620 02:19:24 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:03:50.620
00:03:50.620
00:03:50.620 CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.620 http://cunit.sourceforge.net/
00:03:50.620
00:03:50.620
00:03:50.620 Suite: pci
00:03:50.620 Test: pci_hook ...[2024-11-28 02:19:24.102381] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56700 has claimed it
00:03:50.620 passed
00:03:50.620
00:03:50.620 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.620 suites 1 1 n/a 0 0
00:03:50.620 tests 1 1 1 0 0
00:03:50.620 asserts 25 25 25 0 n/a
00:03:50.620
00:03:50.620 Elapsed time = 0.007 seconds
00:03:50.620 EAL: Cannot find device (10000:00:01.0)
00:03:50.620 EAL: Failed to attach device on primary process
00:03:50.620
00:03:50.620 real 0m0.107s
00:03:50.620 user 0m0.047s
00:03:50.620 sys 0m0.059s
00:03:50.620 02:19:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.620 02:19:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:50.620 ************************************
00:03:50.620 END TEST env_pci ************************************
00:03:50.620 02:19:24 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:50.620 02:19:24 env -- env/env.sh@15 -- # uname
00:03:50.620 02:19:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:50.620 02:19:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:50.620 02:19:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:50.620 02:19:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:50.620 02:19:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.620 02:19:24 env -- common/autotest_common.sh@10 -- # set +x
00:03:50.620 ************************************
00:03:50.620 START TEST env_dpdk_post_init ************************************
00:03:50.620 02:19:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:50.879 EAL: Detected CPU lcores: 10
00:03:50.879 EAL: Detected NUMA nodes: 1
00:03:50.879 EAL: Detected shared linkage of DPDK
00:03:50.879 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:50.879 EAL: Selected IOVA mode 'PA'
00:03:50.879 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:50.879 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:03:50.879 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:03:50.879 Starting DPDK initialization...
00:03:50.879 Starting SPDK post initialization...
00:03:50.879 SPDK NVMe probe
00:03:50.879 Attaching to 0000:00:10.0
00:03:50.879 Attaching to 0000:00:11.0
00:03:50.879 Attached to 0000:00:10.0
00:03:50.879 Attached to 0000:00:11.0
00:03:50.879 Cleaning up...
00:03:50.879
00:03:50.879 real 0m0.279s
00:03:50.879 user 0m0.092s
00:03:50.879 sys 0m0.087s
00:03:50.879 02:19:24 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.879 02:19:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:50.879 ************************************
00:03:50.879 END TEST env_dpdk_post_init
00:03:50.879 ************************************
00:03:51.138 02:19:24 env -- env/env.sh@26 -- # uname
00:03:51.138 02:19:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:51.138 02:19:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:03:51.138 02:19:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:51.138 02:19:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:51.138 02:19:24 env -- common/autotest_common.sh@10 -- # set +x
00:03:51.138 ************************************
00:03:51.138 START TEST env_mem_callbacks
00:03:51.138 ************************************
00:03:51.138 02:19:24 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:03:51.138 EAL: Detected CPU lcores: 10
00:03:51.138 EAL: Detected NUMA nodes: 1
00:03:51.138 EAL: Detected shared linkage of DPDK
00:03:51.138 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:51.138 EAL: Selected IOVA mode 'PA'
00:03:51.138
00:03:51.138
00:03:51.138 CUnit - A unit testing framework for C - Version 2.1-3
00:03:51.138 http://cunit.sourceforge.net/
00:03:51.138
00:03:51.138
00:03:51.138 Suite: memory
00:03:51.138 Test: test ...
00:03:51.138 register 0x200000200000 2097152
00:03:51.138 malloc 3145728
00:03:51.138 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:51.138 register 0x200000400000 4194304
00:03:51.138 buf 0x2000004fffc0 len 3145728 PASSED
00:03:51.138 malloc 64
00:03:51.138 buf 0x2000004ffec0 len 64 PASSED
00:03:51.138 malloc 4194304
00:03:51.138 register 0x200000800000 6291456
00:03:51.138 buf 0x2000009fffc0 len 4194304 PASSED
00:03:51.138 free 0x2000004fffc0 3145728
00:03:51.138 free 0x2000004ffec0 64
00:03:51.138 unregister 0x200000400000 4194304 PASSED
00:03:51.138 free 0x2000009fffc0 4194304
00:03:51.138 unregister 0x200000800000 6291456 PASSED
00:03:51.138 malloc 8388608
00:03:51.138 register 0x200000400000 10485760
00:03:51.138 buf 0x2000005fffc0 len 8388608 PASSED
00:03:51.138 free 0x2000005fffc0 8388608
00:03:51.397 unregister 0x200000400000 10485760 PASSED
00:03:51.397 passed
00:03:51.397
00:03:51.397 Run Summary: Type Total Ran Passed Failed Inactive
00:03:51.397 suites 1 1 n/a 0 0
00:03:51.397 tests 1 1 1 0 0
00:03:51.397 asserts 15 15 15 0 n/a
00:03:51.397
00:03:51.397 Elapsed time = 0.071 seconds
00:03:51.397
00:03:51.397 real 0m0.264s
00:03:51.397 user 0m0.097s
00:03:51.397 sys 0m0.065s
00:03:51.397 02:19:24 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:51.397 02:19:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:51.397 ************************************
00:03:51.397 END TEST env_mem_callbacks
00:03:51.397 ************************************
00:03:51.397 ************************************
00:03:51.397 END TEST env
00:03:51.397
00:03:51.397 real 0m9.686s
00:03:51.397 user 0m7.944s
00:03:51.397 sys 0m1.385s
00:03:51.397 02:19:24 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:51.397 02:19:24 env -- common/autotest_common.sh@10 -- # set +x
00:03:51.397 ************************************
00:03:51.397 02:19:24 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:03:51.397 02:19:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:51.397 02:19:24 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:51.397 02:19:24 -- common/autotest_common.sh@10 -- # set +x
00:03:51.397 ************************************
00:03:51.397 START TEST rpc
00:03:51.397 ************************************
00:03:51.397 02:19:24 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:03:51.698 * Looking for test storage...
00:03:51.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:51.698 02:19:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:51.698 02:19:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:51.698 02:19:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:51.698 02:19:25 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:51.698 02:19:25 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:51.698 02:19:25 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:51.698 02:19:25 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:51.698 02:19:25 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:51.698 02:19:25 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:51.698 02:19:25 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:51.698 02:19:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:51.698 02:19:25 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:51.698 02:19:25 rpc -- scripts/common.sh@345 -- # : 1
00:03:51.698 02:19:25 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:51.698 02:19:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:51.698 02:19:25 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:51.698 02:19:25 rpc -- scripts/common.sh@353 -- # local d=1
00:03:51.698 02:19:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:51.698 02:19:25 rpc -- scripts/common.sh@355 -- # echo 1
00:03:51.698 02:19:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:51.698 02:19:25 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:51.698 02:19:25 rpc -- scripts/common.sh@353 -- # local d=2
00:03:51.698 02:19:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:51.698 02:19:25 rpc -- scripts/common.sh@355 -- # echo 2
00:03:51.698 02:19:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:51.698 02:19:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:51.698 02:19:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:51.698 02:19:25 rpc -- scripts/common.sh@368 -- # return 0
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:51.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:51.698 --rc genhtml_branch_coverage=1
00:03:51.698 --rc genhtml_function_coverage=1
00:03:51.698 --rc genhtml_legend=1
00:03:51.698 --rc geninfo_all_blocks=1
00:03:51.698 --rc geninfo_unexecuted_blocks=1
00:03:51.698
00:03:51.698 '
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:51.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:51.698 --rc genhtml_branch_coverage=1
00:03:51.698 --rc genhtml_function_coverage=1
00:03:51.698 --rc genhtml_legend=1
00:03:51.698 --rc geninfo_all_blocks=1
00:03:51.698 --rc geninfo_unexecuted_blocks=1
00:03:51.698
00:03:51.698 '
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:51.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:51.698 --rc genhtml_branch_coverage=1
00:03:51.698 --rc genhtml_function_coverage=1
00:03:51.698 --rc genhtml_legend=1
00:03:51.698 --rc geninfo_all_blocks=1
00:03:51.698 --rc geninfo_unexecuted_blocks=1
00:03:51.698
00:03:51.698 '
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:51.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:51.698 --rc genhtml_branch_coverage=1
00:03:51.698 --rc genhtml_function_coverage=1
00:03:51.698 --rc genhtml_legend=1
00:03:51.698 --rc geninfo_all_blocks=1
00:03:51.698 --rc geninfo_unexecuted_blocks=1
00:03:51.698
00:03:51.698 '
00:03:51.698 02:19:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56827
00:03:51.698 02:19:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:03:51.698 02:19:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:51.698 02:19:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56827
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 56827 ']'
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:51.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:51.698 02:19:25 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:51.698 [2024-11-28 02:19:25.302797] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:03:51.698 [2024-11-28 02:19:25.302960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56827 ]
00:03:51.985 [2024-11-28 02:19:25.465649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:51.985 [2024-11-28 02:19:25.580387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:51.985 [2024-11-28 02:19:25.580469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56827' to capture a snapshot of events at runtime.
00:03:51.985 [2024-11-28 02:19:25.580479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:51.985 [2024-11-28 02:19:25.580488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:51.985 [2024-11-28 02:19:25.580495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56827 for offline analysis/debug.
00:03:51.985 [2024-11-28 02:19:25.581687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:52.924 02:19:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:52.924 02:19:26 rpc -- common/autotest_common.sh@868 -- # return 0
00:03:52.924 02:19:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:03:52.924 02:19:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:03:52.924 02:19:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:52.924 02:19:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:52.924 02:19:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.924 02:19:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.924 02:19:26 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:52.924 ************************************
00:03:52.924 START TEST rpc_integrity
00:03:52.924 ************************************
00:03:52.924 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:52.924 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:52.924 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:52.924 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:52.924 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:52.924 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:52.924 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:52.924 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:52.924 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:52.924 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:52.924 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:52.924 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:52.924 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:52.924 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:52.924 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:52.924 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:52.924 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:52.924 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:52.924 {
00:03:52.924 "name": "Malloc0",
00:03:52.924 "aliases": [
00:03:52.924 "6260f167-5814-4e8a-a149-03a2a4155a71"
00:03:52.924 ],
00:03:52.924 "product_name": "Malloc disk",
00:03:52.924 "block_size": 512,
00:03:52.924 "num_blocks": 16384,
00:03:52.924 "uuid": "6260f167-5814-4e8a-a149-03a2a4155a71",
00:03:52.924 "assigned_rate_limits": {
00:03:52.924 "rw_ios_per_sec": 0,
00:03:52.924 "rw_mbytes_per_sec": 0,
00:03:52.924 "r_mbytes_per_sec": 0,
00:03:52.924 "w_mbytes_per_sec": 0
00:03:52.924 },
00:03:52.924 "claimed": false,
00:03:52.924 "zoned": false,
00:03:52.924 "supported_io_types": {
00:03:52.924 "read": true,
00:03:52.924 "write": true,
00:03:52.924 "unmap": true,
00:03:52.924 "flush": true,
00:03:52.924 "reset": true,
00:03:52.924 "nvme_admin": false,
00:03:52.924 "nvme_io": false,
00:03:52.924 "nvme_io_md": false,
00:03:52.924 "write_zeroes": true,
00:03:52.924 "zcopy": true,
00:03:52.924 "get_zone_info": false,
00:03:52.924 "zone_management": false,
00:03:52.924 "zone_append": false,
00:03:52.924 "compare": false,
00:03:52.924 "compare_and_write": false,
00:03:52.924 "abort": true,
00:03:52.924 "seek_hole": false,
00:03:52.924 "seek_data": false,
00:03:52.924 "copy": true,
00:03:52.924 "nvme_iov_md": false
00:03:52.924 },
00:03:52.924 "memory_domains": [
00:03:52.924 {
00:03:52.924 "dma_device_id": "system",
00:03:52.924 "dma_device_type": 1
00:03:52.924 },
00:03:52.924 {
00:03:52.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:52.924 "dma_device_type": 2
00:03:52.924 }
00:03:52.924 ],
00:03:52.924 "driver_specific": {}
00:03:52.924 }
00:03:52.924 ]'
00:03:53.184 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:53.184 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:53.184 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:53.184 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:53.184 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:53.184 [2024-11-28 02:19:26.620480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:53.184 [2024-11-28 02:19:26.620575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:53.184 [2024-11-28 02:19:26.620606] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:03:53.184 [2024-11-28 02:19:26.620624] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:53.184 [2024-11-28 02:19:26.623280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:53.184 [2024-11-28 02:19:26.623329] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 Passthru0
00:03:53.184 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:53.184 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:53.184 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:53.184 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:53.184 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.185 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:53.185 { 00:03:53.185 "name": "Malloc0", 00:03:53.185 "aliases": [ 00:03:53.185 "6260f167-5814-4e8a-a149-03a2a4155a71" 00:03:53.185 ], 00:03:53.185 "product_name": "Malloc disk", 00:03:53.185 "block_size": 512, 00:03:53.185 "num_blocks": 16384, 00:03:53.185 "uuid": "6260f167-5814-4e8a-a149-03a2a4155a71", 00:03:53.185 "assigned_rate_limits": { 00:03:53.185 "rw_ios_per_sec": 0, 00:03:53.185 "rw_mbytes_per_sec": 0, 00:03:53.185 "r_mbytes_per_sec": 0, 00:03:53.185 "w_mbytes_per_sec": 0 00:03:53.185 }, 00:03:53.185 "claimed": true, 00:03:53.185 "claim_type": "exclusive_write", 00:03:53.185 "zoned": false, 00:03:53.185 "supported_io_types": { 00:03:53.185 "read": true, 00:03:53.185 "write": true, 00:03:53.185 "unmap": true, 00:03:53.185 "flush": true, 00:03:53.185 "reset": true, 00:03:53.185 "nvme_admin": false, 00:03:53.185 "nvme_io": false, 00:03:53.185 "nvme_io_md": false, 00:03:53.185 "write_zeroes": true, 00:03:53.185 "zcopy": true, 00:03:53.185 "get_zone_info": false, 00:03:53.185 "zone_management": false, 00:03:53.185 "zone_append": false, 00:03:53.185 "compare": false, 00:03:53.185 "compare_and_write": false, 00:03:53.185 "abort": true, 00:03:53.185 "seek_hole": false, 00:03:53.185 "seek_data": false, 00:03:53.185 "copy": true, 00:03:53.185 "nvme_iov_md": false 00:03:53.185 }, 00:03:53.185 "memory_domains": [ 00:03:53.185 { 00:03:53.185 "dma_device_id": "system", 00:03:53.185 "dma_device_type": 1 00:03:53.185 }, 00:03:53.185 { 00:03:53.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.185 "dma_device_type": 2 00:03:53.185 } 00:03:53.185 ], 00:03:53.185 "driver_specific": {} 00:03:53.185 }, 00:03:53.185 { 00:03:53.185 "name": "Passthru0", 00:03:53.185 "aliases": [ 00:03:53.185 "e9c79bc1-bb5a-58d1-a193-085380b28785" 00:03:53.185 ], 00:03:53.185 "product_name": "passthru", 00:03:53.185 
"block_size": 512, 00:03:53.185 "num_blocks": 16384, 00:03:53.185 "uuid": "e9c79bc1-bb5a-58d1-a193-085380b28785", 00:03:53.185 "assigned_rate_limits": { 00:03:53.185 "rw_ios_per_sec": 0, 00:03:53.185 "rw_mbytes_per_sec": 0, 00:03:53.185 "r_mbytes_per_sec": 0, 00:03:53.185 "w_mbytes_per_sec": 0 00:03:53.185 }, 00:03:53.185 "claimed": false, 00:03:53.185 "zoned": false, 00:03:53.185 "supported_io_types": { 00:03:53.185 "read": true, 00:03:53.185 "write": true, 00:03:53.185 "unmap": true, 00:03:53.185 "flush": true, 00:03:53.185 "reset": true, 00:03:53.185 "nvme_admin": false, 00:03:53.185 "nvme_io": false, 00:03:53.185 "nvme_io_md": false, 00:03:53.185 "write_zeroes": true, 00:03:53.185 "zcopy": true, 00:03:53.185 "get_zone_info": false, 00:03:53.185 "zone_management": false, 00:03:53.185 "zone_append": false, 00:03:53.185 "compare": false, 00:03:53.185 "compare_and_write": false, 00:03:53.185 "abort": true, 00:03:53.185 "seek_hole": false, 00:03:53.185 "seek_data": false, 00:03:53.185 "copy": true, 00:03:53.185 "nvme_iov_md": false 00:03:53.185 }, 00:03:53.185 "memory_domains": [ 00:03:53.185 { 00:03:53.185 "dma_device_id": "system", 00:03:53.185 "dma_device_type": 1 00:03:53.185 }, 00:03:53.185 { 00:03:53.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.185 "dma_device_type": 2 00:03:53.185 } 00:03:53.185 ], 00:03:53.185 "driver_specific": { 00:03:53.185 "passthru": { 00:03:53.185 "name": "Passthru0", 00:03:53.185 "base_bdev_name": "Malloc0" 00:03:53.185 } 00:03:53.185 } 00:03:53.185 } 00:03:53.185 ]' 00:03:53.185 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:53.185 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:53.185 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:53.185 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.185 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.185 02:19:26 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.185 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:53.185 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.185 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.185 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.185 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:53.185 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.185 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.185 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.185 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:53.185 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:53.185 02:19:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:53.185 00:03:53.185 real 0m0.358s 00:03:53.185 user 0m0.203s 00:03:53.185 sys 0m0.049s 00:03:53.185 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.185 02:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.185 ************************************ 00:03:53.185 END TEST rpc_integrity 00:03:53.185 ************************************ 00:03:53.445 02:19:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:53.445 02:19:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.445 02:19:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.445 02:19:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.445 ************************************ 00:03:53.445 START TEST rpc_plugins 00:03:53.445 ************************************ 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:53.445 02:19:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.445 02:19:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:53.445 02:19:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.445 02:19:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:53.445 { 00:03:53.445 "name": "Malloc1", 00:03:53.445 "aliases": [ 00:03:53.445 "3aeab57b-7418-441a-8b10-068939192260" 00:03:53.445 ], 00:03:53.445 "product_name": "Malloc disk", 00:03:53.445 "block_size": 4096, 00:03:53.445 "num_blocks": 256, 00:03:53.445 "uuid": "3aeab57b-7418-441a-8b10-068939192260", 00:03:53.445 "assigned_rate_limits": { 00:03:53.445 "rw_ios_per_sec": 0, 00:03:53.445 "rw_mbytes_per_sec": 0, 00:03:53.445 "r_mbytes_per_sec": 0, 00:03:53.445 "w_mbytes_per_sec": 0 00:03:53.445 }, 00:03:53.445 "claimed": false, 00:03:53.445 "zoned": false, 00:03:53.445 "supported_io_types": { 00:03:53.445 "read": true, 00:03:53.445 "write": true, 00:03:53.445 "unmap": true, 00:03:53.445 "flush": true, 00:03:53.445 "reset": true, 00:03:53.445 "nvme_admin": false, 00:03:53.445 "nvme_io": false, 00:03:53.445 "nvme_io_md": false, 00:03:53.445 "write_zeroes": true, 00:03:53.445 "zcopy": true, 00:03:53.445 "get_zone_info": false, 00:03:53.445 "zone_management": false, 00:03:53.445 "zone_append": false, 00:03:53.445 "compare": false, 00:03:53.445 "compare_and_write": false, 00:03:53.445 "abort": true, 00:03:53.445 "seek_hole": false, 00:03:53.445 "seek_data": false, 00:03:53.445 "copy": 
true, 00:03:53.445 "nvme_iov_md": false 00:03:53.445 }, 00:03:53.445 "memory_domains": [ 00:03:53.445 { 00:03:53.445 "dma_device_id": "system", 00:03:53.445 "dma_device_type": 1 00:03:53.445 }, 00:03:53.445 { 00:03:53.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.445 "dma_device_type": 2 00:03:53.445 } 00:03:53.445 ], 00:03:53.445 "driver_specific": {} 00:03:53.445 } 00:03:53.445 ]' 00:03:53.445 02:19:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:53.445 02:19:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:53.445 02:19:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.445 02:19:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.445 02:19:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.446 02:19:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:53.446 02:19:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:53.446 02:19:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:53.446 00:03:53.446 real 0m0.180s 00:03:53.446 user 0m0.109s 00:03:53.446 sys 0m0.026s 00:03:53.446 02:19:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.446 02:19:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.446 ************************************ 00:03:53.446 END TEST rpc_plugins 00:03:53.446 ************************************ 00:03:53.446 02:19:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:53.446 02:19:27 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.446 02:19:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.446 02:19:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.446 ************************************ 00:03:53.446 START TEST rpc_trace_cmd_test 00:03:53.446 ************************************ 00:03:53.446 02:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:53.446 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:53.704 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:53.705 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56827", 00:03:53.705 "tpoint_group_mask": "0x8", 00:03:53.705 "iscsi_conn": { 00:03:53.705 "mask": "0x2", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "scsi": { 00:03:53.705 "mask": "0x4", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "bdev": { 00:03:53.705 "mask": "0x8", 00:03:53.705 "tpoint_mask": "0xffffffffffffffff" 00:03:53.705 }, 00:03:53.705 "nvmf_rdma": { 00:03:53.705 "mask": "0x10", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "nvmf_tcp": { 00:03:53.705 "mask": "0x20", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "ftl": { 00:03:53.705 "mask": "0x40", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "blobfs": { 00:03:53.705 "mask": "0x80", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "dsa": { 00:03:53.705 "mask": "0x200", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "thread": { 00:03:53.705 "mask": "0x400", 00:03:53.705 
"tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "nvme_pcie": { 00:03:53.705 "mask": "0x800", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "iaa": { 00:03:53.705 "mask": "0x1000", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "nvme_tcp": { 00:03:53.705 "mask": "0x2000", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "bdev_nvme": { 00:03:53.705 "mask": "0x4000", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "sock": { 00:03:53.705 "mask": "0x8000", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "blob": { 00:03:53.705 "mask": "0x10000", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "bdev_raid": { 00:03:53.705 "mask": "0x20000", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 }, 00:03:53.705 "scheduler": { 00:03:53.705 "mask": "0x40000", 00:03:53.705 "tpoint_mask": "0x0" 00:03:53.705 } 00:03:53.705 }' 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:53.705 00:03:53.705 real 0m0.253s 00:03:53.705 user 0m0.210s 00:03:53.705 sys 0m0.034s 00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
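The rpc_trace_cmd_test trace above pulls `trace_get_info` and asserts, via jq, that the bdev group's `tpoint_mask` is non-zero (`0xffffffffffffffff` vs `0x0`) while the other groups stay masked off. The same checks can be sketched in plain bash arithmetic once the mask values are in hand; the variable names below are illustrative, not SPDK's.

```shell
#!/usr/bin/env bash
# Sketch only: the bitwise checks behind the jq assertions above.
# Mask values copied from the trace; variable names are ours, not SPDK's.
bdev_tpoint_mask=0xffffffffffffffff
tpoint_group_mask=0x8

# bash arithmetic understands the 0x prefix, so the comparison is direct
if (( bdev_tpoint_mask != 0x0 )); then
  echo "bdev tracepoints enabled"
fi

# the bdev trace group corresponds to bit 0x8 in tpoint_group_mask
if (( tpoint_group_mask & 0x8 )); then
  echo "bdev trace group selected"
fi
```

Note that `0xffffffffffffffff` wraps to -1 in bash's signed 64-bit arithmetic, which is still non-zero, so the inequality test behaves as intended.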
00:03:53.705 02:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:53.705 ************************************ 00:03:53.705 END TEST rpc_trace_cmd_test 00:03:53.705 ************************************ 00:03:53.992 02:19:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:53.992 02:19:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:53.992 02:19:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:53.992 02:19:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.992 02:19:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.992 02:19:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.992 ************************************ 00:03:53.992 START TEST rpc_daemon_integrity 00:03:53.992 ************************************ 00:03:53.992 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:53.992 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:53.993 { 00:03:53.993 "name": "Malloc2", 00:03:53.993 "aliases": [ 00:03:53.993 "49cba9bb-6e71-4307-8f2d-20970799d834" 00:03:53.993 ], 00:03:53.993 "product_name": "Malloc disk", 00:03:53.993 "block_size": 512, 00:03:53.993 "num_blocks": 16384, 00:03:53.993 "uuid": "49cba9bb-6e71-4307-8f2d-20970799d834", 00:03:53.993 "assigned_rate_limits": { 00:03:53.993 "rw_ios_per_sec": 0, 00:03:53.993 "rw_mbytes_per_sec": 0, 00:03:53.993 "r_mbytes_per_sec": 0, 00:03:53.993 "w_mbytes_per_sec": 0 00:03:53.993 }, 00:03:53.993 "claimed": false, 00:03:53.993 "zoned": false, 00:03:53.993 "supported_io_types": { 00:03:53.993 "read": true, 00:03:53.993 "write": true, 00:03:53.993 "unmap": true, 00:03:53.993 "flush": true, 00:03:53.993 "reset": true, 00:03:53.993 "nvme_admin": false, 00:03:53.993 "nvme_io": false, 00:03:53.993 "nvme_io_md": false, 00:03:53.993 "write_zeroes": true, 00:03:53.993 "zcopy": true, 00:03:53.993 "get_zone_info": false, 00:03:53.993 "zone_management": false, 00:03:53.993 "zone_append": false, 00:03:53.993 "compare": false, 00:03:53.993 "compare_and_write": false, 00:03:53.993 "abort": true, 00:03:53.993 "seek_hole": false, 00:03:53.993 "seek_data": false, 00:03:53.993 "copy": true, 00:03:53.993 "nvme_iov_md": false 00:03:53.993 }, 00:03:53.993 "memory_domains": [ 00:03:53.993 { 00:03:53.993 "dma_device_id": "system", 00:03:53.993 "dma_device_type": 1 00:03:53.993 }, 00:03:53.993 { 00:03:53.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.993 "dma_device_type": 2 00:03:53.993 } 
00:03:53.993 ], 00:03:53.993 "driver_specific": {} 00:03:53.993 } 00:03:53.993 ]' 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.993 [2024-11-28 02:19:27.592747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:53.993 [2024-11-28 02:19:27.592830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:53.993 [2024-11-28 02:19:27.592857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:03:53.993 [2024-11-28 02:19:27.592869] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:53.993 [2024-11-28 02:19:27.595507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:53.993 [2024-11-28 02:19:27.595559] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:53.993 Passthru0 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:53.993 { 00:03:53.993 "name": "Malloc2", 00:03:53.993 "aliases": [ 00:03:53.993 "49cba9bb-6e71-4307-8f2d-20970799d834" 
00:03:53.993 ], 00:03:53.993 "product_name": "Malloc disk", 00:03:53.993 "block_size": 512, 00:03:53.993 "num_blocks": 16384, 00:03:53.993 "uuid": "49cba9bb-6e71-4307-8f2d-20970799d834", 00:03:53.993 "assigned_rate_limits": { 00:03:53.993 "rw_ios_per_sec": 0, 00:03:53.993 "rw_mbytes_per_sec": 0, 00:03:53.993 "r_mbytes_per_sec": 0, 00:03:53.993 "w_mbytes_per_sec": 0 00:03:53.993 }, 00:03:53.993 "claimed": true, 00:03:53.993 "claim_type": "exclusive_write", 00:03:53.993 "zoned": false, 00:03:53.993 "supported_io_types": { 00:03:53.993 "read": true, 00:03:53.993 "write": true, 00:03:53.993 "unmap": true, 00:03:53.993 "flush": true, 00:03:53.993 "reset": true, 00:03:53.993 "nvme_admin": false, 00:03:53.993 "nvme_io": false, 00:03:53.993 "nvme_io_md": false, 00:03:53.993 "write_zeroes": true, 00:03:53.993 "zcopy": true, 00:03:53.993 "get_zone_info": false, 00:03:53.993 "zone_management": false, 00:03:53.993 "zone_append": false, 00:03:53.993 "compare": false, 00:03:53.993 "compare_and_write": false, 00:03:53.993 "abort": true, 00:03:53.993 "seek_hole": false, 00:03:53.993 "seek_data": false, 00:03:53.993 "copy": true, 00:03:53.993 "nvme_iov_md": false 00:03:53.993 }, 00:03:53.993 "memory_domains": [ 00:03:53.993 { 00:03:53.993 "dma_device_id": "system", 00:03:53.993 "dma_device_type": 1 00:03:53.993 }, 00:03:53.993 { 00:03:53.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.993 "dma_device_type": 2 00:03:53.993 } 00:03:53.993 ], 00:03:53.993 "driver_specific": {} 00:03:53.993 }, 00:03:53.993 { 00:03:53.993 "name": "Passthru0", 00:03:53.993 "aliases": [ 00:03:53.993 "ed8c31d5-474f-5ab4-8bff-3c86659564e7" 00:03:53.993 ], 00:03:53.993 "product_name": "passthru", 00:03:53.993 "block_size": 512, 00:03:53.993 "num_blocks": 16384, 00:03:53.993 "uuid": "ed8c31d5-474f-5ab4-8bff-3c86659564e7", 00:03:53.993 "assigned_rate_limits": { 00:03:53.993 "rw_ios_per_sec": 0, 00:03:53.993 "rw_mbytes_per_sec": 0, 00:03:53.993 "r_mbytes_per_sec": 0, 00:03:53.993 "w_mbytes_per_sec": 0 
00:03:53.993 }, 00:03:53.993 "claimed": false, 00:03:53.993 "zoned": false, 00:03:53.993 "supported_io_types": { 00:03:53.993 "read": true, 00:03:53.993 "write": true, 00:03:53.993 "unmap": true, 00:03:53.993 "flush": true, 00:03:53.993 "reset": true, 00:03:53.993 "nvme_admin": false, 00:03:53.993 "nvme_io": false, 00:03:53.993 "nvme_io_md": false, 00:03:53.993 "write_zeroes": true, 00:03:53.993 "zcopy": true, 00:03:53.993 "get_zone_info": false, 00:03:53.993 "zone_management": false, 00:03:53.993 "zone_append": false, 00:03:53.993 "compare": false, 00:03:53.993 "compare_and_write": false, 00:03:53.993 "abort": true, 00:03:53.993 "seek_hole": false, 00:03:53.993 "seek_data": false, 00:03:53.993 "copy": true, 00:03:53.993 "nvme_iov_md": false 00:03:53.993 }, 00:03:53.993 "memory_domains": [ 00:03:53.993 { 00:03:53.993 "dma_device_id": "system", 00:03:53.993 "dma_device_type": 1 00:03:53.993 }, 00:03:53.993 { 00:03:53.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.993 "dma_device_type": 2 00:03:53.993 } 00:03:53.993 ], 00:03:53.993 "driver_specific": { 00:03:53.993 "passthru": { 00:03:53.993 "name": "Passthru0", 00:03:53.993 "base_bdev_name": "Malloc2" 00:03:53.993 } 00:03:53.993 } 00:03:53.993 } 00:03:53.993 ]' 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:03:53.993 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.252 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.252 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:54.252 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.252 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.253 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.253 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:54.253 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:54.253 02:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:54.253 00:03:54.253 real 0m0.327s 00:03:54.253 user 0m0.182s 00:03:54.253 sys 0m0.045s 00:03:54.253 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.253 02:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.253 ************************************ 00:03:54.253 END TEST rpc_daemon_integrity 00:03:54.253 ************************************ 00:03:54.253 02:19:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:54.253 02:19:27 rpc -- rpc/rpc.sh@84 -- # killprocess 56827 00:03:54.253 02:19:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 56827 ']' 00:03:54.253 02:19:27 rpc -- common/autotest_common.sh@958 -- # kill -0 56827 00:03:54.253 02:19:27 rpc -- common/autotest_common.sh@959 -- # uname 00:03:54.253 02:19:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.253 02:19:27 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56827 00:03:54.253 02:19:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:54.253 02:19:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:54.253 
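The `killprocess 56827` sequence traced above guards the kill with `kill -0` and a `ps --no-headers -o comm=` name check, so the harness never signals a recycled pid that now belongs to another program. A minimal standalone sketch of that pattern (function name and demo are ours; the real helper in autotest_common.sh also special-cases sudo-owned reactors):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard pattern: only signal a pid that is alive
# and whose command name matches what we expect. Names here are ours.
safe_kill() {
  local pid=$1 expected=$2
  kill -0 "$pid" 2>/dev/null || return 1       # pid not running at all
  local name
  name=$(ps --no-headers -o comm= -p "$pid")   # what is it really?
  [[ $name == "$expected" ]] || return 1       # pid was recycled
  kill "$pid"
}

# demo: start a sleeper, kill it through the guard, confirm it is gone
sleep 60 & pid=$!
safe_kill "$pid" sleep && echo "killed $pid"
wait "$pid" 2>/dev/null || true                # reap so the pid truly vanishes
! kill -0 "$pid" 2>/dev/null && echo "process gone"
```

The `wait` matters: until the child is reaped it lingers as a zombie, and `kill -0` on a zombie still succeeds.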
killing process with pid 56827 00:03:54.253 02:19:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56827' 00:03:54.253 02:19:27 rpc -- common/autotest_common.sh@973 -- # kill 56827 00:03:54.253 02:19:27 rpc -- common/autotest_common.sh@978 -- # wait 56827 00:03:56.789 00:03:56.789 real 0m5.252s 00:03:56.789 user 0m5.816s 00:03:56.789 sys 0m0.892s 00:03:56.789 02:19:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.789 02:19:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.789 ************************************ 00:03:56.789 END TEST rpc 00:03:56.789 ************************************ 00:03:56.789 02:19:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:56.789 02:19:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.789 02:19:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.789 02:19:30 -- common/autotest_common.sh@10 -- # set +x 00:03:56.789 ************************************ 00:03:56.789 START TEST skip_rpc 00:03:56.789 ************************************ 00:03:56.789 02:19:30 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:56.789 * Looking for test storage... 
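Each suite in this log is launched through `run_test` (e.g. `run_test skip_rpc ...` just above), which produces the `START TEST` / `END TEST` banners and the `real/user/sys` timing lines seen throughout. A reduced sketch of that wrapper, our own simplification rather than the autotest_common.sh implementation:

```shell
#!/usr/bin/env bash
# Reduced sketch of the run_test banner/timing wrapper seen in this log.
# The real helper lives in autotest_common.sh; this is a simplification.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                       # timing goes to stderr, like the log's
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

my_check() { [ 1 -eq 1 ]; }       # stand-in test body
run_test_sketch my_check my_check && echo "suite passed"
```

Propagating the wrapped command's exit status is what lets an outer `trap`/`catchError` stage mark the whole build failed when any one suite fails.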
00:03:56.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:56.789 02:19:30 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:56.789 02:19:30 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:56.789 02:19:30 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.047 02:19:30 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:57.047 02:19:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.048 02:19:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:57.048 02:19:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:57.048 02:19:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.048 02:19:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:57.048 02:19:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.048 02:19:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.048 02:19:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.048 02:19:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:57.048 02:19:30 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.048 02:19:30 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.048 --rc genhtml_branch_coverage=1 00:03:57.048 --rc genhtml_function_coverage=1 00:03:57.048 --rc genhtml_legend=1 00:03:57.048 --rc geninfo_all_blocks=1 00:03:57.048 --rc geninfo_unexecuted_blocks=1 00:03:57.048 00:03:57.048 ' 00:03:57.048 02:19:30 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.048 --rc genhtml_branch_coverage=1 00:03:57.048 --rc genhtml_function_coverage=1 00:03:57.048 --rc genhtml_legend=1 00:03:57.048 --rc geninfo_all_blocks=1 00:03:57.048 --rc geninfo_unexecuted_blocks=1 00:03:57.048 00:03:57.048 ' 00:03:57.048 02:19:30 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.048 --rc genhtml_branch_coverage=1 00:03:57.048 --rc genhtml_function_coverage=1 00:03:57.048 --rc genhtml_legend=1 00:03:57.048 --rc geninfo_all_blocks=1 00:03:57.048 --rc geninfo_unexecuted_blocks=1 00:03:57.048 00:03:57.048 ' 00:03:57.048 02:19:30 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.048 --rc genhtml_branch_coverage=1 00:03:57.048 --rc genhtml_function_coverage=1 00:03:57.048 --rc genhtml_legend=1 00:03:57.048 --rc geninfo_all_blocks=1 00:03:57.048 --rc geninfo_unexecuted_blocks=1 00:03:57.048 00:03:57.048 ' 00:03:57.048 02:19:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:57.048 02:19:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:57.048 02:19:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:57.048 02:19:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.048 02:19:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.048 02:19:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.048 ************************************ 00:03:57.048 START TEST skip_rpc 00:03:57.048 ************************************ 00:03:57.048 02:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:57.048 02:19:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57056 00:03:57.048 02:19:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:57.048 02:19:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.048 02:19:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:57.048 [2024-11-28 02:19:30.635034] Starting SPDK v25.01-pre 
git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:03:57.048 [2024-11-28 02:19:30.635187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57056 ] 00:03:57.306 [2024-11-28 02:19:30.809505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.306 [2024-11-28 02:19:30.924194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57056 00:04:02.573 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57056 ']' 00:04:02.574 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57056 00:04:02.574 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:02.574 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.574 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57056 00:04:02.574 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.574 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.574 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57056' 00:04:02.574 killing process with pid 57056 00:04:02.574 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57056 00:04:02.574 02:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57056 00:04:04.477 00:04:04.477 real 0m7.397s 00:04:04.477 user 0m6.915s 00:04:04.477 sys 0m0.398s 00:04:04.477 02:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.477 02:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.477 ************************************ 00:04:04.477 END TEST skip_rpc 00:04:04.477 ************************************ 00:04:04.477 02:19:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:04.477 02:19:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.477 02:19:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.477 02:19:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.477 
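The skip_rpc test above starts the target with `--no-rpc-server` and then uses `NOT rpc_cmd spdk_get_version` to assert that the RPC call *fails*; the trace shows the helper capturing `es=1` and checking `(( es > 128 ))` to distinguish a clean failure from a crash. A minimal sketch of that idiom (simplified from autotest_common.sh; the function name here is ours):

```shell
#!/usr/bin/env bash
# Sketch of the NOT helper idiom: invert a command's status so a test can
# assert that something fails. Simplified; the real helper in
# autotest_common.sh treats exit codes > 128 (signals/crashes) as errors.
NOT_sketch() {
  if "$@"; then
    return 1                      # command unexpectedly succeeded
  else
    local es=$?
    (( es > 128 )) && return $es  # died on a signal: a real error
    return 0                      # clean failure, which is what we wanted
  fi
}

NOT_sketch false && echo "failure detected as expected"
NOT_sketch grep -q needle /dev/null && echo "no match, as expected"
```

Without the `> 128` distinction, a segfaulting target would satisfy the test just as well as the intended "RPC server refused the call" failure.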
************************************ 00:04:04.477 START TEST skip_rpc_with_json 00:04:04.477 ************************************ 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57160 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57160 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57160 ']' 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.477 02:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.477 [2024-11-28 02:19:38.097381] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
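The `cmp_versions` trace earlier in this log (`lt 1.15 2` from scripts/common.sh) splits both versions on `.-`, treats missing fields as zero, and compares field by field numerically. A self-contained re-implementation sketch of that ordering (function name and structure are ours, condensed from the traced steps):

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced from scripts/common.sh:
# split on '.' / '-', pad the shorter version with zeros, compare each
# field numerically. Returns 0 when $1 is strictly less than $2.
version_lt() {
  local IFS='.-'
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This is why `lcov` 1.15 sorts below 2 here even though a plain string comparison would put "1.15" after "1.2".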
00:04:04.478 [2024-11-28 02:19:38.097913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57160 ] 00:04:04.737 [2024-11-28 02:19:38.272204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.737 [2024-11-28 02:19:38.383844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.675 [2024-11-28 02:19:39.230336] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:05.675 request: 00:04:05.675 { 00:04:05.675 "trtype": "tcp", 00:04:05.675 "method": "nvmf_get_transports", 00:04:05.675 "req_id": 1 00:04:05.675 } 00:04:05.675 Got JSON-RPC error response 00:04:05.675 response: 00:04:05.675 { 00:04:05.675 "code": -19, 00:04:05.675 "message": "No such device" 00:04:05.675 } 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.675 [2024-11-28 02:19:39.242457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.675 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.936 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.936 02:19:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:05.936 { 00:04:05.936 "subsystems": [ 00:04:05.936 { 00:04:05.936 "subsystem": "fsdev", 00:04:05.936 "config": [ 00:04:05.936 { 00:04:05.936 "method": "fsdev_set_opts", 00:04:05.936 "params": { 00:04:05.936 "fsdev_io_pool_size": 65535, 00:04:05.936 "fsdev_io_cache_size": 256 00:04:05.936 } 00:04:05.936 } 00:04:05.936 ] 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "subsystem": "keyring", 00:04:05.936 "config": [] 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "subsystem": "iobuf", 00:04:05.936 "config": [ 00:04:05.936 { 00:04:05.936 "method": "iobuf_set_options", 00:04:05.936 "params": { 00:04:05.936 "small_pool_count": 8192, 00:04:05.936 "large_pool_count": 1024, 00:04:05.936 "small_bufsize": 8192, 00:04:05.936 "large_bufsize": 135168, 00:04:05.936 "enable_numa": false 00:04:05.936 } 00:04:05.936 } 00:04:05.936 ] 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "subsystem": "sock", 00:04:05.936 "config": [ 00:04:05.936 { 00:04:05.936 "method": "sock_set_default_impl", 00:04:05.936 "params": { 00:04:05.936 "impl_name": "posix" 00:04:05.936 } 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "method": "sock_impl_set_options", 00:04:05.936 "params": { 00:04:05.936 "impl_name": "ssl", 00:04:05.936 "recv_buf_size": 4096, 00:04:05.936 "send_buf_size": 4096, 00:04:05.936 "enable_recv_pipe": true, 00:04:05.936 "enable_quickack": false, 00:04:05.936 
"enable_placement_id": 0, 00:04:05.936 "enable_zerocopy_send_server": true, 00:04:05.936 "enable_zerocopy_send_client": false, 00:04:05.936 "zerocopy_threshold": 0, 00:04:05.936 "tls_version": 0, 00:04:05.936 "enable_ktls": false 00:04:05.936 } 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "method": "sock_impl_set_options", 00:04:05.936 "params": { 00:04:05.936 "impl_name": "posix", 00:04:05.936 "recv_buf_size": 2097152, 00:04:05.936 "send_buf_size": 2097152, 00:04:05.936 "enable_recv_pipe": true, 00:04:05.936 "enable_quickack": false, 00:04:05.936 "enable_placement_id": 0, 00:04:05.936 "enable_zerocopy_send_server": true, 00:04:05.936 "enable_zerocopy_send_client": false, 00:04:05.936 "zerocopy_threshold": 0, 00:04:05.936 "tls_version": 0, 00:04:05.936 "enable_ktls": false 00:04:05.936 } 00:04:05.936 } 00:04:05.936 ] 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "subsystem": "vmd", 00:04:05.936 "config": [] 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "subsystem": "accel", 00:04:05.936 "config": [ 00:04:05.936 { 00:04:05.936 "method": "accel_set_options", 00:04:05.936 "params": { 00:04:05.936 "small_cache_size": 128, 00:04:05.936 "large_cache_size": 16, 00:04:05.936 "task_count": 2048, 00:04:05.936 "sequence_count": 2048, 00:04:05.936 "buf_count": 2048 00:04:05.936 } 00:04:05.936 } 00:04:05.936 ] 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "subsystem": "bdev", 00:04:05.936 "config": [ 00:04:05.936 { 00:04:05.936 "method": "bdev_set_options", 00:04:05.936 "params": { 00:04:05.936 "bdev_io_pool_size": 65535, 00:04:05.936 "bdev_io_cache_size": 256, 00:04:05.936 "bdev_auto_examine": true, 00:04:05.936 "iobuf_small_cache_size": 128, 00:04:05.936 "iobuf_large_cache_size": 16 00:04:05.936 } 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "method": "bdev_raid_set_options", 00:04:05.936 "params": { 00:04:05.936 "process_window_size_kb": 1024, 00:04:05.936 "process_max_bandwidth_mb_sec": 0 00:04:05.936 } 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "method": "bdev_iscsi_set_options", 
00:04:05.936 "params": { 00:04:05.936 "timeout_sec": 30 00:04:05.936 } 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "method": "bdev_nvme_set_options", 00:04:05.936 "params": { 00:04:05.936 "action_on_timeout": "none", 00:04:05.936 "timeout_us": 0, 00:04:05.936 "timeout_admin_us": 0, 00:04:05.936 "keep_alive_timeout_ms": 10000, 00:04:05.936 "arbitration_burst": 0, 00:04:05.936 "low_priority_weight": 0, 00:04:05.936 "medium_priority_weight": 0, 00:04:05.936 "high_priority_weight": 0, 00:04:05.936 "nvme_adminq_poll_period_us": 10000, 00:04:05.936 "nvme_ioq_poll_period_us": 0, 00:04:05.936 "io_queue_requests": 0, 00:04:05.936 "delay_cmd_submit": true, 00:04:05.936 "transport_retry_count": 4, 00:04:05.936 "bdev_retry_count": 3, 00:04:05.936 "transport_ack_timeout": 0, 00:04:05.936 "ctrlr_loss_timeout_sec": 0, 00:04:05.936 "reconnect_delay_sec": 0, 00:04:05.936 "fast_io_fail_timeout_sec": 0, 00:04:05.936 "disable_auto_failback": false, 00:04:05.936 "generate_uuids": false, 00:04:05.936 "transport_tos": 0, 00:04:05.936 "nvme_error_stat": false, 00:04:05.936 "rdma_srq_size": 0, 00:04:05.936 "io_path_stat": false, 00:04:05.936 "allow_accel_sequence": false, 00:04:05.936 "rdma_max_cq_size": 0, 00:04:05.936 "rdma_cm_event_timeout_ms": 0, 00:04:05.936 "dhchap_digests": [ 00:04:05.936 "sha256", 00:04:05.936 "sha384", 00:04:05.936 "sha512" 00:04:05.936 ], 00:04:05.936 "dhchap_dhgroups": [ 00:04:05.936 "null", 00:04:05.936 "ffdhe2048", 00:04:05.936 "ffdhe3072", 00:04:05.936 "ffdhe4096", 00:04:05.936 "ffdhe6144", 00:04:05.936 "ffdhe8192" 00:04:05.936 ] 00:04:05.936 } 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "method": "bdev_nvme_set_hotplug", 00:04:05.936 "params": { 00:04:05.936 "period_us": 100000, 00:04:05.936 "enable": false 00:04:05.936 } 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "method": "bdev_wait_for_examine" 00:04:05.936 } 00:04:05.936 ] 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "subsystem": "scsi", 00:04:05.936 "config": null 00:04:05.936 }, 00:04:05.936 { 
00:04:05.936 "subsystem": "scheduler", 00:04:05.936 "config": [ 00:04:05.936 { 00:04:05.936 "method": "framework_set_scheduler", 00:04:05.936 "params": { 00:04:05.936 "name": "static" 00:04:05.936 } 00:04:05.936 } 00:04:05.936 ] 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "subsystem": "vhost_scsi", 00:04:05.936 "config": [] 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "subsystem": "vhost_blk", 00:04:05.936 "config": [] 00:04:05.936 }, 00:04:05.936 { 00:04:05.936 "subsystem": "ublk", 00:04:05.936 "config": [] 00:04:05.936 }, 00:04:05.936 { 00:04:05.937 "subsystem": "nbd", 00:04:05.937 "config": [] 00:04:05.937 }, 00:04:05.937 { 00:04:05.937 "subsystem": "nvmf", 00:04:05.937 "config": [ 00:04:05.937 { 00:04:05.937 "method": "nvmf_set_config", 00:04:05.937 "params": { 00:04:05.937 "discovery_filter": "match_any", 00:04:05.937 "admin_cmd_passthru": { 00:04:05.937 "identify_ctrlr": false 00:04:05.937 }, 00:04:05.937 "dhchap_digests": [ 00:04:05.937 "sha256", 00:04:05.937 "sha384", 00:04:05.937 "sha512" 00:04:05.937 ], 00:04:05.937 "dhchap_dhgroups": [ 00:04:05.937 "null", 00:04:05.937 "ffdhe2048", 00:04:05.937 "ffdhe3072", 00:04:05.937 "ffdhe4096", 00:04:05.937 "ffdhe6144", 00:04:05.937 "ffdhe8192" 00:04:05.937 ] 00:04:05.937 } 00:04:05.937 }, 00:04:05.937 { 00:04:05.937 "method": "nvmf_set_max_subsystems", 00:04:05.937 "params": { 00:04:05.937 "max_subsystems": 1024 00:04:05.937 } 00:04:05.937 }, 00:04:05.937 { 00:04:05.937 "method": "nvmf_set_crdt", 00:04:05.937 "params": { 00:04:05.937 "crdt1": 0, 00:04:05.937 "crdt2": 0, 00:04:05.937 "crdt3": 0 00:04:05.937 } 00:04:05.937 }, 00:04:05.937 { 00:04:05.937 "method": "nvmf_create_transport", 00:04:05.937 "params": { 00:04:05.937 "trtype": "TCP", 00:04:05.937 "max_queue_depth": 128, 00:04:05.937 "max_io_qpairs_per_ctrlr": 127, 00:04:05.937 "in_capsule_data_size": 4096, 00:04:05.937 "max_io_size": 131072, 00:04:05.937 "io_unit_size": 131072, 00:04:05.937 "max_aq_depth": 128, 00:04:05.937 "num_shared_buffers": 511, 
00:04:05.937 "buf_cache_size": 4294967295, 00:04:05.937 "dif_insert_or_strip": false, 00:04:05.937 "zcopy": false, 00:04:05.937 "c2h_success": true, 00:04:05.937 "sock_priority": 0, 00:04:05.937 "abort_timeout_sec": 1, 00:04:05.937 "ack_timeout": 0, 00:04:05.937 "data_wr_pool_size": 0 00:04:05.937 } 00:04:05.937 } 00:04:05.937 ] 00:04:05.937 }, 00:04:05.937 { 00:04:05.937 "subsystem": "iscsi", 00:04:05.937 "config": [ 00:04:05.937 { 00:04:05.937 "method": "iscsi_set_options", 00:04:05.937 "params": { 00:04:05.937 "node_base": "iqn.2016-06.io.spdk", 00:04:05.937 "max_sessions": 128, 00:04:05.937 "max_connections_per_session": 2, 00:04:05.937 "max_queue_depth": 64, 00:04:05.937 "default_time2wait": 2, 00:04:05.937 "default_time2retain": 20, 00:04:05.937 "first_burst_length": 8192, 00:04:05.937 "immediate_data": true, 00:04:05.937 "allow_duplicated_isid": false, 00:04:05.937 "error_recovery_level": 0, 00:04:05.937 "nop_timeout": 60, 00:04:05.937 "nop_in_interval": 30, 00:04:05.937 "disable_chap": false, 00:04:05.937 "require_chap": false, 00:04:05.937 "mutual_chap": false, 00:04:05.937 "chap_group": 0, 00:04:05.937 "max_large_datain_per_connection": 64, 00:04:05.937 "max_r2t_per_connection": 4, 00:04:05.937 "pdu_pool_size": 36864, 00:04:05.937 "immediate_data_pool_size": 16384, 00:04:05.937 "data_out_pool_size": 2048 00:04:05.937 } 00:04:05.937 } 00:04:05.937 ] 00:04:05.937 } 00:04:05.937 ] 00:04:05.937 } 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57160 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57160 ']' 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57160 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57160 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.937 killing process with pid 57160 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57160' 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57160 00:04:05.937 02:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57160 00:04:08.476 02:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57216 00:04:08.476 02:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:08.476 02:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57216 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57216 ']' 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57216 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57216 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:13.757 killing process with pid 57216 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57216' 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57216 00:04:13.757 02:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57216 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:15.667 00:04:15.667 real 0m11.267s 00:04:15.667 user 0m10.721s 00:04:15.667 sys 0m0.849s 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.667 ************************************ 00:04:15.667 END TEST skip_rpc_with_json 00:04:15.667 ************************************ 00:04:15.667 02:19:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:15.667 02:19:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.667 02:19:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.667 02:19:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.667 ************************************ 00:04:15.667 START TEST skip_rpc_with_delay 00:04:15.667 ************************************ 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:15.667 02:19:49 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:15.667 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:15.926 [2024-11-28 02:19:49.445387] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:15.926 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:15.926 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:15.926 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:15.926 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:15.926 00:04:15.926 real 0m0.186s 00:04:15.926 user 0m0.099s 00:04:15.926 sys 0m0.085s 00:04:15.926 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.926 02:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:15.926 ************************************ 00:04:15.926 END TEST skip_rpc_with_delay 00:04:15.926 ************************************ 00:04:15.926 02:19:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:15.926 02:19:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:15.926 02:19:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:15.926 02:19:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.926 02:19:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.926 02:19:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.926 ************************************ 00:04:15.926 START TEST exit_on_failed_rpc_init 00:04:15.926 ************************************ 00:04:15.926 02:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:15.926 02:19:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57350 00:04:15.926 02:19:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.926 02:19:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57350 00:04:15.926 02:19:49 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57350 ']' 00:04:15.926 02:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.926 02:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.926 02:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.926 02:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.926 02:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:16.186 [2024-11-28 02:19:49.689043] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:16.186 [2024-11-28 02:19:49.689148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57350 ] 00:04:16.445 [2024-11-28 02:19:49.862858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.445 [2024-11-28 02:19:49.975347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.386 02:19:50 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:17.386 02:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.386 [2024-11-28 02:19:50.934437] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:17.386 [2024-11-28 02:19:50.934575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57373 ] 00:04:17.645 [2024-11-28 02:19:51.109804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.645 [2024-11-28 02:19:51.226516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.645 [2024-11-28 02:19:51.226636] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:17.645 [2024-11-28 02:19:51.226651] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:17.645 [2024-11-28 02:19:51.226663] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57350 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57350 ']' 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57350 00:04:17.904 02:19:51 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57350 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.904 killing process with pid 57350 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57350' 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57350 00:04:17.904 02:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57350 00:04:20.447 00:04:20.447 real 0m4.410s 00:04:20.447 user 0m4.744s 00:04:20.447 sys 0m0.568s 00:04:20.447 02:19:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.447 02:19:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.447 ************************************ 00:04:20.447 END TEST exit_on_failed_rpc_init 00:04:20.447 ************************************ 00:04:20.447 02:19:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:20.447 00:04:20.447 real 0m23.761s 00:04:20.447 user 0m22.690s 00:04:20.447 sys 0m2.210s 00:04:20.447 02:19:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.447 02:19:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.447 ************************************ 00:04:20.447 END TEST skip_rpc 00:04:20.447 ************************************ 00:04:20.447 02:19:54 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:20.447 02:19:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.447 02:19:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.447 02:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:20.447 ************************************ 00:04:20.447 START TEST rpc_client 00:04:20.447 ************************************ 00:04:20.447 02:19:54 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:20.707 * Looking for test storage... 00:04:20.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:20.707 02:19:54 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.707 02:19:54 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.707 02:19:54 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.707 02:19:54 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:20.707 02:19:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.708 02:19:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:20.708 02:19:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.708 02:19:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:20.708 02:19:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:20.708 02:19:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.708 02:19:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:20.708 02:19:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.708 02:19:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.708 02:19:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.708 02:19:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:20.708 02:19:54 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.708 02:19:54 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.708 --rc genhtml_branch_coverage=1 00:04:20.708 --rc genhtml_function_coverage=1 00:04:20.708 --rc genhtml_legend=1 00:04:20.708 --rc geninfo_all_blocks=1 00:04:20.708 --rc geninfo_unexecuted_blocks=1 00:04:20.708 00:04:20.708 ' 00:04:20.708 02:19:54 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.708 --rc genhtml_branch_coverage=1 00:04:20.708 --rc genhtml_function_coverage=1 00:04:20.708 --rc 
genhtml_legend=1 00:04:20.708 --rc geninfo_all_blocks=1 00:04:20.708 --rc geninfo_unexecuted_blocks=1 00:04:20.708 00:04:20.708 ' 00:04:20.708 02:19:54 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.708 --rc genhtml_branch_coverage=1 00:04:20.708 --rc genhtml_function_coverage=1 00:04:20.708 --rc genhtml_legend=1 00:04:20.708 --rc geninfo_all_blocks=1 00:04:20.708 --rc geninfo_unexecuted_blocks=1 00:04:20.708 00:04:20.708 ' 00:04:20.708 02:19:54 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.708 --rc genhtml_branch_coverage=1 00:04:20.708 --rc genhtml_function_coverage=1 00:04:20.708 --rc genhtml_legend=1 00:04:20.708 --rc geninfo_all_blocks=1 00:04:20.708 --rc geninfo_unexecuted_blocks=1 00:04:20.708 00:04:20.708 ' 00:04:20.708 02:19:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:20.968 OK 00:04:20.968 02:19:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:20.968 00:04:20.968 real 0m0.294s 00:04:20.968 user 0m0.163s 00:04:20.968 sys 0m0.148s 00:04:20.968 02:19:54 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.968 02:19:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:20.968 ************************************ 00:04:20.968 END TEST rpc_client 00:04:20.968 ************************************ 00:04:20.968 02:19:54 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:20.968 02:19:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.968 02:19:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.968 02:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:20.968 ************************************ 00:04:20.968 START TEST json_config 
00:04:20.968 ************************************ 00:04:20.968 02:19:54 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:20.968 02:19:54 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.968 02:19:54 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.968 02:19:54 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.968 02:19:54 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.968 02:19:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.968 02:19:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.968 02:19:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.968 02:19:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.968 02:19:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.968 02:19:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.968 02:19:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.968 02:19:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.968 02:19:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.968 02:19:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.968 02:19:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.968 02:19:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:20.968 02:19:54 json_config -- scripts/common.sh@345 -- # : 1 00:04:20.968 02:19:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.968 02:19:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.968 02:19:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:21.229 02:19:54 json_config -- scripts/common.sh@353 -- # local d=1 00:04:21.229 02:19:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.229 02:19:54 json_config -- scripts/common.sh@355 -- # echo 1 00:04:21.229 02:19:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.229 02:19:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:21.229 02:19:54 json_config -- scripts/common.sh@353 -- # local d=2 00:04:21.229 02:19:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.229 02:19:54 json_config -- scripts/common.sh@355 -- # echo 2 00:04:21.229 02:19:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.229 02:19:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.229 02:19:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.229 02:19:54 json_config -- scripts/common.sh@368 -- # return 0 00:04:21.229 02:19:54 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.229 02:19:54 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.229 --rc genhtml_branch_coverage=1 00:04:21.229 --rc genhtml_function_coverage=1 00:04:21.229 --rc genhtml_legend=1 00:04:21.229 --rc geninfo_all_blocks=1 00:04:21.229 --rc geninfo_unexecuted_blocks=1 00:04:21.229 00:04:21.229 ' 00:04:21.229 02:19:54 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.229 --rc genhtml_branch_coverage=1 00:04:21.229 --rc genhtml_function_coverage=1 00:04:21.229 --rc genhtml_legend=1 00:04:21.229 --rc geninfo_all_blocks=1 00:04:21.229 --rc geninfo_unexecuted_blocks=1 00:04:21.229 00:04:21.229 ' 00:04:21.229 02:19:54 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.229 --rc genhtml_branch_coverage=1 00:04:21.229 --rc genhtml_function_coverage=1 00:04:21.229 --rc genhtml_legend=1 00:04:21.229 --rc geninfo_all_blocks=1 00:04:21.229 --rc geninfo_unexecuted_blocks=1 00:04:21.229 00:04:21.229 ' 00:04:21.229 02:19:54 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.229 --rc genhtml_branch_coverage=1 00:04:21.229 --rc genhtml_function_coverage=1 00:04:21.229 --rc genhtml_legend=1 00:04:21.229 --rc geninfo_all_blocks=1 00:04:21.229 --rc geninfo_unexecuted_blocks=1 00:04:21.229 00:04:21.229 ' 00:04:21.229 02:19:54 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da1044c6-56df-42b4-a1ba-44edbe26f207 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=da1044c6-56df-42b4-a1ba-44edbe26f207 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.229 02:19:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:21.229 02:19:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.229 02:19:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.229 02:19:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.229 02:19:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.229 02:19:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.229 02:19:54 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.229 02:19:54 json_config -- paths/export.sh@5 -- # export PATH 00:04:21.229 02:19:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@51 -- # : 0 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.229 02:19:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:21.229 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:21.230 02:19:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:21.230 02:19:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:21.230 02:19:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:21.230 02:19:54 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:21.230 02:19:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:21.230 02:19:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:21.230 02:19:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:21.230 02:19:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:21.230 WARNING: No tests are enabled so not running JSON configuration tests 00:04:21.230 02:19:54 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:21.230 02:19:54 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:21.230 00:04:21.230 real 0m0.220s 00:04:21.230 user 0m0.130s 00:04:21.230 sys 0m0.099s 00:04:21.230 02:19:54 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.230 02:19:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.230 ************************************ 00:04:21.230 END TEST json_config 00:04:21.230 ************************************ 00:04:21.230 02:19:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:21.230 02:19:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.230 02:19:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.230 02:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:21.230 ************************************ 00:04:21.230 START TEST json_config_extra_key 00:04:21.230 ************************************ 00:04:21.230 02:19:54 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:21.230 02:19:54 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.230 02:19:54 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:21.230 02:19:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.230 02:19:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.230 02:19:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:21.491 02:19:54 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.491 02:19:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.491 --rc genhtml_branch_coverage=1 00:04:21.491 --rc genhtml_function_coverage=1 00:04:21.491 --rc genhtml_legend=1 00:04:21.491 --rc geninfo_all_blocks=1 00:04:21.491 --rc geninfo_unexecuted_blocks=1 00:04:21.491 00:04:21.491 ' 00:04:21.491 02:19:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.491 --rc genhtml_branch_coverage=1 00:04:21.491 --rc genhtml_function_coverage=1 00:04:21.491 --rc 
genhtml_legend=1 00:04:21.491 --rc geninfo_all_blocks=1 00:04:21.491 --rc geninfo_unexecuted_blocks=1 00:04:21.491 00:04:21.491 ' 00:04:21.491 02:19:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.491 --rc genhtml_branch_coverage=1 00:04:21.491 --rc genhtml_function_coverage=1 00:04:21.491 --rc genhtml_legend=1 00:04:21.491 --rc geninfo_all_blocks=1 00:04:21.491 --rc geninfo_unexecuted_blocks=1 00:04:21.491 00:04:21.491 ' 00:04:21.491 02:19:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.491 --rc genhtml_branch_coverage=1 00:04:21.491 --rc genhtml_function_coverage=1 00:04:21.491 --rc genhtml_legend=1 00:04:21.491 --rc geninfo_all_blocks=1 00:04:21.491 --rc geninfo_unexecuted_blocks=1 00:04:21.491 00:04:21.491 ' 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da1044c6-56df-42b4-a1ba-44edbe26f207 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=da1044c6-56df-42b4-a1ba-44edbe26f207 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.491 02:19:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.491 02:19:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.491 02:19:54 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.491 02:19:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.491 02:19:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:21.491 02:19:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:21.491 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:21.491 02:19:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:21.491 INFO: launching applications... 
00:04:21.491 02:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57583 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.491 Waiting for target to run... 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57583 /var/tmp/spdk_tgt.sock 00:04:21.491 02:19:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:21.491 02:19:54 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57583 ']' 00:04:21.491 02:19:54 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.492 02:19:54 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.492 02:19:54 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:21.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.492 02:19:54 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.492 02:19:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:21.492 [2024-11-28 02:19:55.066859] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:21.492 [2024-11-28 02:19:55.067021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57583 ] 00:04:22.059 [2024-11-28 02:19:55.461881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.059 [2024-11-28 02:19:55.567526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.000 00:04:23.000 INFO: shutting down applications... 00:04:23.000 02:19:56 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.000 02:19:56 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:23.000 02:19:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:23.000 02:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:23.000 02:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:23.000 02:19:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:23.000 02:19:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.000 02:19:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57583 ]] 00:04:23.000 02:19:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57583 00:04:23.000 02:19:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.000 02:19:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.000 02:19:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57583 00:04:23.000 02:19:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.260 02:19:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.260 02:19:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.260 02:19:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57583 00:04:23.260 02:19:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.831 02:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.831 02:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.831 02:19:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57583 00:04:23.831 02:19:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.400 02:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.400 02:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.400 02:19:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57583 00:04:24.400 02:19:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.968 02:19:58 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:24.968 02:19:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.968 02:19:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57583 00:04:24.968 02:19:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.227 02:19:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.227 02:19:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.227 02:19:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57583 00:04:25.227 02:19:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.793 02:19:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.793 02:19:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.793 02:19:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57583 00:04:25.793 02:19:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:25.793 02:19:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:25.793 02:19:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:25.793 SPDK target shutdown done 00:04:25.793 02:19:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:25.793 Success 00:04:25.793 02:19:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:25.793 00:04:25.793 real 0m4.616s 00:04:25.793 user 0m4.250s 00:04:25.793 sys 0m0.557s 00:04:25.793 02:19:59 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.793 02:19:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:25.793 ************************************ 00:04:25.793 END TEST json_config_extra_key 00:04:25.793 ************************************ 00:04:25.793 02:19:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:25.793 02:19:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.793 02:19:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.793 02:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:25.793 ************************************ 00:04:25.793 START TEST alias_rpc 00:04:25.793 ************************************ 00:04:25.793 02:19:59 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.052 * Looking for test storage... 00:04:26.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:26.052 02:19:59 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.052 02:19:59 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.052 02:19:59 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.052 02:19:59 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.052 02:19:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.052 02:19:59 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.053 02:19:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.053 --rc genhtml_branch_coverage=1 00:04:26.053 --rc genhtml_function_coverage=1 00:04:26.053 --rc genhtml_legend=1 00:04:26.053 --rc geninfo_all_blocks=1 00:04:26.053 --rc geninfo_unexecuted_blocks=1 00:04:26.053 00:04:26.053 ' 00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.053 --rc genhtml_branch_coverage=1 00:04:26.053 --rc genhtml_function_coverage=1 00:04:26.053 --rc 
genhtml_legend=1 00:04:26.053 --rc geninfo_all_blocks=1 00:04:26.053 --rc geninfo_unexecuted_blocks=1 00:04:26.053 00:04:26.053 ' 00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.053 --rc genhtml_branch_coverage=1 00:04:26.053 --rc genhtml_function_coverage=1 00:04:26.053 --rc genhtml_legend=1 00:04:26.053 --rc geninfo_all_blocks=1 00:04:26.053 --rc geninfo_unexecuted_blocks=1 00:04:26.053 00:04:26.053 ' 00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.053 --rc genhtml_branch_coverage=1 00:04:26.053 --rc genhtml_function_coverage=1 00:04:26.053 --rc genhtml_legend=1 00:04:26.053 --rc geninfo_all_blocks=1 00:04:26.053 --rc geninfo_unexecuted_blocks=1 00:04:26.053 00:04:26.053 ' 00:04:26.053 02:19:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:26.053 02:19:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57689 00:04:26.053 02:19:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.053 02:19:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57689 00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57689 ']' 00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.053 02:19:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.311 [2024-11-28 02:19:59.750296] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:26.311 [2024-11-28 02:19:59.750429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57689 ] 00:04:26.311 [2024-11-28 02:19:59.925898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.568 [2024-11-28 02:20:00.040834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.505 02:20:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.505 02:20:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:27.505 02:20:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:27.505 02:20:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57689 00:04:27.505 02:20:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57689 ']' 00:04:27.505 02:20:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57689 00:04:27.506 02:20:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:27.506 02:20:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.506 02:20:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57689 00:04:27.506 02:20:01 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.506 02:20:01 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.506 killing process with pid 57689 00:04:27.506 02:20:01 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57689' 00:04:27.506 02:20:01 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57689 00:04:27.506 02:20:01 alias_rpc -- common/autotest_common.sh@978 -- # wait 57689 00:04:30.075 ************************************ 00:04:30.075 END TEST alias_rpc 00:04:30.075 ************************************ 00:04:30.075 00:04:30.075 real 0m4.018s 00:04:30.075 user 0m4.012s 00:04:30.075 sys 0m0.560s 00:04:30.075 02:20:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.075 02:20:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.075 02:20:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:30.075 02:20:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:30.075 02:20:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.075 02:20:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.075 02:20:03 -- common/autotest_common.sh@10 -- # set +x 00:04:30.075 ************************************ 00:04:30.075 START TEST spdkcli_tcp 00:04:30.075 ************************************ 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:30.075 * Looking for test storage... 
00:04:30.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.075 02:20:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.075 --rc genhtml_branch_coverage=1 00:04:30.075 --rc genhtml_function_coverage=1 00:04:30.075 --rc genhtml_legend=1 00:04:30.075 --rc geninfo_all_blocks=1 00:04:30.075 --rc geninfo_unexecuted_blocks=1 00:04:30.075 00:04:30.075 ' 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.075 --rc genhtml_branch_coverage=1 00:04:30.075 --rc genhtml_function_coverage=1 00:04:30.075 --rc genhtml_legend=1 00:04:30.075 --rc geninfo_all_blocks=1 00:04:30.075 --rc geninfo_unexecuted_blocks=1 00:04:30.075 00:04:30.075 ' 00:04:30.075 02:20:03 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.075 --rc genhtml_branch_coverage=1 00:04:30.075 --rc genhtml_function_coverage=1 00:04:30.075 --rc genhtml_legend=1 00:04:30.075 --rc geninfo_all_blocks=1 00:04:30.075 --rc geninfo_unexecuted_blocks=1 00:04:30.075 00:04:30.075 ' 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.075 --rc genhtml_branch_coverage=1 00:04:30.075 --rc genhtml_function_coverage=1 00:04:30.075 --rc genhtml_legend=1 00:04:30.075 --rc geninfo_all_blocks=1 00:04:30.075 --rc geninfo_unexecuted_blocks=1 00:04:30.075 00:04:30.075 ' 00:04:30.075 02:20:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:30.075 02:20:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:30.075 02:20:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:30.075 02:20:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:30.075 02:20:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:30.075 02:20:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:30.075 02:20:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.075 02:20:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57796 00:04:30.075 02:20:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57796 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57796 ']' 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.075 02:20:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:30.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.075 02:20:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.334 [2024-11-28 02:20:03.822204] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:30.334 [2024-11-28 02:20:03.822323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57796 ] 00:04:30.334 [2024-11-28 02:20:03.996590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.593 [2024-11-28 02:20:04.110001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.593 [2024-11-28 02:20:04.110039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.531 02:20:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.531 02:20:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:31.531 02:20:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:31.532 02:20:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57813 00:04:31.532 02:20:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:31.532 [ 00:04:31.532 "bdev_malloc_delete", 00:04:31.532 
"bdev_malloc_create", 00:04:31.532 "bdev_null_resize", 00:04:31.532 "bdev_null_delete", 00:04:31.532 "bdev_null_create", 00:04:31.532 "bdev_nvme_cuse_unregister", 00:04:31.532 "bdev_nvme_cuse_register", 00:04:31.532 "bdev_opal_new_user", 00:04:31.532 "bdev_opal_set_lock_state", 00:04:31.532 "bdev_opal_delete", 00:04:31.532 "bdev_opal_get_info", 00:04:31.532 "bdev_opal_create", 00:04:31.532 "bdev_nvme_opal_revert", 00:04:31.532 "bdev_nvme_opal_init", 00:04:31.532 "bdev_nvme_send_cmd", 00:04:31.532 "bdev_nvme_set_keys", 00:04:31.532 "bdev_nvme_get_path_iostat", 00:04:31.532 "bdev_nvme_get_mdns_discovery_info", 00:04:31.532 "bdev_nvme_stop_mdns_discovery", 00:04:31.532 "bdev_nvme_start_mdns_discovery", 00:04:31.532 "bdev_nvme_set_multipath_policy", 00:04:31.532 "bdev_nvme_set_preferred_path", 00:04:31.532 "bdev_nvme_get_io_paths", 00:04:31.532 "bdev_nvme_remove_error_injection", 00:04:31.532 "bdev_nvme_add_error_injection", 00:04:31.532 "bdev_nvme_get_discovery_info", 00:04:31.532 "bdev_nvme_stop_discovery", 00:04:31.532 "bdev_nvme_start_discovery", 00:04:31.532 "bdev_nvme_get_controller_health_info", 00:04:31.532 "bdev_nvme_disable_controller", 00:04:31.532 "bdev_nvme_enable_controller", 00:04:31.532 "bdev_nvme_reset_controller", 00:04:31.532 "bdev_nvme_get_transport_statistics", 00:04:31.532 "bdev_nvme_apply_firmware", 00:04:31.532 "bdev_nvme_detach_controller", 00:04:31.532 "bdev_nvme_get_controllers", 00:04:31.532 "bdev_nvme_attach_controller", 00:04:31.532 "bdev_nvme_set_hotplug", 00:04:31.532 "bdev_nvme_set_options", 00:04:31.532 "bdev_passthru_delete", 00:04:31.532 "bdev_passthru_create", 00:04:31.532 "bdev_lvol_set_parent_bdev", 00:04:31.532 "bdev_lvol_set_parent", 00:04:31.532 "bdev_lvol_check_shallow_copy", 00:04:31.532 "bdev_lvol_start_shallow_copy", 00:04:31.532 "bdev_lvol_grow_lvstore", 00:04:31.532 "bdev_lvol_get_lvols", 00:04:31.532 "bdev_lvol_get_lvstores", 00:04:31.532 "bdev_lvol_delete", 00:04:31.532 "bdev_lvol_set_read_only", 00:04:31.532 
"bdev_lvol_resize", 00:04:31.532 "bdev_lvol_decouple_parent", 00:04:31.532 "bdev_lvol_inflate", 00:04:31.532 "bdev_lvol_rename", 00:04:31.532 "bdev_lvol_clone_bdev", 00:04:31.532 "bdev_lvol_clone", 00:04:31.532 "bdev_lvol_snapshot", 00:04:31.532 "bdev_lvol_create", 00:04:31.532 "bdev_lvol_delete_lvstore", 00:04:31.532 "bdev_lvol_rename_lvstore", 00:04:31.532 "bdev_lvol_create_lvstore", 00:04:31.532 "bdev_raid_set_options", 00:04:31.532 "bdev_raid_remove_base_bdev", 00:04:31.532 "bdev_raid_add_base_bdev", 00:04:31.532 "bdev_raid_delete", 00:04:31.532 "bdev_raid_create", 00:04:31.532 "bdev_raid_get_bdevs", 00:04:31.532 "bdev_error_inject_error", 00:04:31.532 "bdev_error_delete", 00:04:31.532 "bdev_error_create", 00:04:31.532 "bdev_split_delete", 00:04:31.532 "bdev_split_create", 00:04:31.532 "bdev_delay_delete", 00:04:31.532 "bdev_delay_create", 00:04:31.532 "bdev_delay_update_latency", 00:04:31.532 "bdev_zone_block_delete", 00:04:31.532 "bdev_zone_block_create", 00:04:31.532 "blobfs_create", 00:04:31.532 "blobfs_detect", 00:04:31.532 "blobfs_set_cache_size", 00:04:31.532 "bdev_aio_delete", 00:04:31.532 "bdev_aio_rescan", 00:04:31.532 "bdev_aio_create", 00:04:31.532 "bdev_ftl_set_property", 00:04:31.532 "bdev_ftl_get_properties", 00:04:31.532 "bdev_ftl_get_stats", 00:04:31.532 "bdev_ftl_unmap", 00:04:31.532 "bdev_ftl_unload", 00:04:31.532 "bdev_ftl_delete", 00:04:31.532 "bdev_ftl_load", 00:04:31.532 "bdev_ftl_create", 00:04:31.532 "bdev_virtio_attach_controller", 00:04:31.532 "bdev_virtio_scsi_get_devices", 00:04:31.532 "bdev_virtio_detach_controller", 00:04:31.532 "bdev_virtio_blk_set_hotplug", 00:04:31.532 "bdev_iscsi_delete", 00:04:31.532 "bdev_iscsi_create", 00:04:31.532 "bdev_iscsi_set_options", 00:04:31.532 "accel_error_inject_error", 00:04:31.532 "ioat_scan_accel_module", 00:04:31.532 "dsa_scan_accel_module", 00:04:31.532 "iaa_scan_accel_module", 00:04:31.532 "keyring_file_remove_key", 00:04:31.532 "keyring_file_add_key", 00:04:31.532 
"keyring_linux_set_options", 00:04:31.532 "fsdev_aio_delete", 00:04:31.532 "fsdev_aio_create", 00:04:31.532 "iscsi_get_histogram", 00:04:31.532 "iscsi_enable_histogram", 00:04:31.532 "iscsi_set_options", 00:04:31.532 "iscsi_get_auth_groups", 00:04:31.532 "iscsi_auth_group_remove_secret", 00:04:31.532 "iscsi_auth_group_add_secret", 00:04:31.532 "iscsi_delete_auth_group", 00:04:31.532 "iscsi_create_auth_group", 00:04:31.532 "iscsi_set_discovery_auth", 00:04:31.532 "iscsi_get_options", 00:04:31.532 "iscsi_target_node_request_logout", 00:04:31.532 "iscsi_target_node_set_redirect", 00:04:31.532 "iscsi_target_node_set_auth", 00:04:31.532 "iscsi_target_node_add_lun", 00:04:31.532 "iscsi_get_stats", 00:04:31.532 "iscsi_get_connections", 00:04:31.532 "iscsi_portal_group_set_auth", 00:04:31.532 "iscsi_start_portal_group", 00:04:31.532 "iscsi_delete_portal_group", 00:04:31.532 "iscsi_create_portal_group", 00:04:31.532 "iscsi_get_portal_groups", 00:04:31.532 "iscsi_delete_target_node", 00:04:31.532 "iscsi_target_node_remove_pg_ig_maps", 00:04:31.532 "iscsi_target_node_add_pg_ig_maps", 00:04:31.532 "iscsi_create_target_node", 00:04:31.532 "iscsi_get_target_nodes", 00:04:31.532 "iscsi_delete_initiator_group", 00:04:31.532 "iscsi_initiator_group_remove_initiators", 00:04:31.532 "iscsi_initiator_group_add_initiators", 00:04:31.532 "iscsi_create_initiator_group", 00:04:31.532 "iscsi_get_initiator_groups", 00:04:31.532 "nvmf_set_crdt", 00:04:31.532 "nvmf_set_config", 00:04:31.532 "nvmf_set_max_subsystems", 00:04:31.532 "nvmf_stop_mdns_prr", 00:04:31.532 "nvmf_publish_mdns_prr", 00:04:31.532 "nvmf_subsystem_get_listeners", 00:04:31.532 "nvmf_subsystem_get_qpairs", 00:04:31.532 "nvmf_subsystem_get_controllers", 00:04:31.532 "nvmf_get_stats", 00:04:31.532 "nvmf_get_transports", 00:04:31.532 "nvmf_create_transport", 00:04:31.532 "nvmf_get_targets", 00:04:31.532 "nvmf_delete_target", 00:04:31.532 "nvmf_create_target", 00:04:31.532 "nvmf_subsystem_allow_any_host", 00:04:31.532 
"nvmf_subsystem_set_keys", 00:04:31.532 "nvmf_subsystem_remove_host", 00:04:31.532 "nvmf_subsystem_add_host", 00:04:31.532 "nvmf_ns_remove_host", 00:04:31.532 "nvmf_ns_add_host", 00:04:31.532 "nvmf_subsystem_remove_ns", 00:04:31.532 "nvmf_subsystem_set_ns_ana_group", 00:04:31.532 "nvmf_subsystem_add_ns", 00:04:31.532 "nvmf_subsystem_listener_set_ana_state", 00:04:31.532 "nvmf_discovery_get_referrals", 00:04:31.532 "nvmf_discovery_remove_referral", 00:04:31.532 "nvmf_discovery_add_referral", 00:04:31.532 "nvmf_subsystem_remove_listener", 00:04:31.532 "nvmf_subsystem_add_listener", 00:04:31.532 "nvmf_delete_subsystem", 00:04:31.532 "nvmf_create_subsystem", 00:04:31.532 "nvmf_get_subsystems", 00:04:31.532 "env_dpdk_get_mem_stats", 00:04:31.532 "nbd_get_disks", 00:04:31.532 "nbd_stop_disk", 00:04:31.532 "nbd_start_disk", 00:04:31.532 "ublk_recover_disk", 00:04:31.532 "ublk_get_disks", 00:04:31.532 "ublk_stop_disk", 00:04:31.532 "ublk_start_disk", 00:04:31.532 "ublk_destroy_target", 00:04:31.532 "ublk_create_target", 00:04:31.532 "virtio_blk_create_transport", 00:04:31.532 "virtio_blk_get_transports", 00:04:31.532 "vhost_controller_set_coalescing", 00:04:31.532 "vhost_get_controllers", 00:04:31.532 "vhost_delete_controller", 00:04:31.532 "vhost_create_blk_controller", 00:04:31.532 "vhost_scsi_controller_remove_target", 00:04:31.532 "vhost_scsi_controller_add_target", 00:04:31.532 "vhost_start_scsi_controller", 00:04:31.532 "vhost_create_scsi_controller", 00:04:31.532 "thread_set_cpumask", 00:04:31.532 "scheduler_set_options", 00:04:31.532 "framework_get_governor", 00:04:31.532 "framework_get_scheduler", 00:04:31.532 "framework_set_scheduler", 00:04:31.532 "framework_get_reactors", 00:04:31.532 "thread_get_io_channels", 00:04:31.532 "thread_get_pollers", 00:04:31.532 "thread_get_stats", 00:04:31.532 "framework_monitor_context_switch", 00:04:31.532 "spdk_kill_instance", 00:04:31.532 "log_enable_timestamps", 00:04:31.532 "log_get_flags", 00:04:31.532 "log_clear_flag", 
00:04:31.532 "log_set_flag", 00:04:31.532 "log_get_level", 00:04:31.532 "log_set_level", 00:04:31.532 "log_get_print_level", 00:04:31.533 "log_set_print_level", 00:04:31.533 "framework_enable_cpumask_locks", 00:04:31.533 "framework_disable_cpumask_locks", 00:04:31.533 "framework_wait_init", 00:04:31.533 "framework_start_init", 00:04:31.533 "scsi_get_devices", 00:04:31.533 "bdev_get_histogram", 00:04:31.533 "bdev_enable_histogram", 00:04:31.533 "bdev_set_qos_limit", 00:04:31.533 "bdev_set_qd_sampling_period", 00:04:31.533 "bdev_get_bdevs", 00:04:31.533 "bdev_reset_iostat", 00:04:31.533 "bdev_get_iostat", 00:04:31.533 "bdev_examine", 00:04:31.533 "bdev_wait_for_examine", 00:04:31.533 "bdev_set_options", 00:04:31.533 "accel_get_stats", 00:04:31.533 "accel_set_options", 00:04:31.533 "accel_set_driver", 00:04:31.533 "accel_crypto_key_destroy", 00:04:31.533 "accel_crypto_keys_get", 00:04:31.533 "accel_crypto_key_create", 00:04:31.533 "accel_assign_opc", 00:04:31.533 "accel_get_module_info", 00:04:31.533 "accel_get_opc_assignments", 00:04:31.533 "vmd_rescan", 00:04:31.533 "vmd_remove_device", 00:04:31.533 "vmd_enable", 00:04:31.533 "sock_get_default_impl", 00:04:31.533 "sock_set_default_impl", 00:04:31.533 "sock_impl_set_options", 00:04:31.533 "sock_impl_get_options", 00:04:31.533 "iobuf_get_stats", 00:04:31.533 "iobuf_set_options", 00:04:31.533 "keyring_get_keys", 00:04:31.533 "framework_get_pci_devices", 00:04:31.533 "framework_get_config", 00:04:31.533 "framework_get_subsystems", 00:04:31.533 "fsdev_set_opts", 00:04:31.533 "fsdev_get_opts", 00:04:31.533 "trace_get_info", 00:04:31.533 "trace_get_tpoint_group_mask", 00:04:31.533 "trace_disable_tpoint_group", 00:04:31.533 "trace_enable_tpoint_group", 00:04:31.533 "trace_clear_tpoint_mask", 00:04:31.533 "trace_set_tpoint_mask", 00:04:31.533 "notify_get_notifications", 00:04:31.533 "notify_get_types", 00:04:31.533 "spdk_get_version", 00:04:31.533 "rpc_get_methods" 00:04:31.533 ] 00:04:31.533 02:20:05 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:31.533 02:20:05 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.533 02:20:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.533 02:20:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:31.533 02:20:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57796 00:04:31.533 02:20:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57796 ']' 00:04:31.533 02:20:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57796 00:04:31.533 02:20:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:31.533 02:20:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.791 02:20:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57796 00:04:31.791 02:20:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.791 02:20:05 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.791 killing process with pid 57796 00:04:31.791 02:20:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57796' 00:04:31.791 02:20:05 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57796 00:04:31.791 02:20:05 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57796 00:04:34.327 00:04:34.327 real 0m4.109s 00:04:34.327 user 0m7.395s 00:04:34.327 sys 0m0.607s 00:04:34.327 02:20:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.327 02:20:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:34.327 ************************************ 00:04:34.327 END TEST spdkcli_tcp 00:04:34.327 ************************************ 00:04:34.327 02:20:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:34.327 02:20:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.327 02:20:07 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.327 02:20:07 -- common/autotest_common.sh@10 -- # set +x 00:04:34.327 ************************************ 00:04:34.327 START TEST dpdk_mem_utility 00:04:34.327 ************************************ 00:04:34.327 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:34.327 * Looking for test storage... 00:04:34.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:34.327 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.327 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.327 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.327 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.327 02:20:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.327 02:20:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.327 02:20:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.327 02:20:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.327 02:20:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:34.328 
02:20:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.328 02:20:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:34.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.328 --rc genhtml_branch_coverage=1 00:04:34.328 --rc genhtml_function_coverage=1 00:04:34.328 --rc genhtml_legend=1 00:04:34.328 --rc geninfo_all_blocks=1 00:04:34.328 --rc geninfo_unexecuted_blocks=1 00:04:34.328 00:04:34.328 ' 00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.328 --rc 
genhtml_branch_coverage=1 00:04:34.328 --rc genhtml_function_coverage=1 00:04:34.328 --rc genhtml_legend=1 00:04:34.328 --rc geninfo_all_blocks=1 00:04:34.328 --rc geninfo_unexecuted_blocks=1 00:04:34.328 00:04:34.328 ' 00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:34.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.328 --rc genhtml_branch_coverage=1 00:04:34.328 --rc genhtml_function_coverage=1 00:04:34.328 --rc genhtml_legend=1 00:04:34.328 --rc geninfo_all_blocks=1 00:04:34.328 --rc geninfo_unexecuted_blocks=1 00:04:34.328 00:04:34.328 ' 00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.328 --rc genhtml_branch_coverage=1 00:04:34.328 --rc genhtml_function_coverage=1 00:04:34.328 --rc genhtml_legend=1 00:04:34.328 --rc geninfo_all_blocks=1 00:04:34.328 --rc geninfo_unexecuted_blocks=1 00:04:34.328 00:04:34.328 ' 00:04:34.328 02:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:34.328 02:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57918 00:04:34.328 02:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.328 02:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57918 00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57918 ']' 00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.328 02:20:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.588 [2024-11-28 02:20:08.004344] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:34.588 [2024-11-28 02:20:08.004485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57918 ] 00:04:34.588 [2024-11-28 02:20:08.177443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.847 [2024-11-28 02:20:08.280418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.789 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.789 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:35.789 02:20:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:35.789 02:20:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:35.789 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.789 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:35.789 { 00:04:35.789 "filename": "/tmp/spdk_mem_dump.txt" 00:04:35.789 } 00:04:35.789 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.789 02:20:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:35.789 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:35.789 1 heaps 
totaling size 824.000000 MiB 00:04:35.789 size: 824.000000 MiB heap id: 0 00:04:35.789 end heaps---------- 00:04:35.789 9 mempools totaling size 603.782043 MiB 00:04:35.789 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:35.789 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:35.789 size: 100.555481 MiB name: bdev_io_57918 00:04:35.789 size: 50.003479 MiB name: msgpool_57918 00:04:35.789 size: 36.509338 MiB name: fsdev_io_57918 00:04:35.789 size: 21.763794 MiB name: PDU_Pool 00:04:35.789 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:35.789 size: 4.133484 MiB name: evtpool_57918 00:04:35.789 size: 0.026123 MiB name: Session_Pool 00:04:35.789 end mempools------- 00:04:35.789 6 memzones totaling size 4.142822 MiB 00:04:35.789 size: 1.000366 MiB name: RG_ring_0_57918 00:04:35.789 size: 1.000366 MiB name: RG_ring_1_57918 00:04:35.789 size: 1.000366 MiB name: RG_ring_4_57918 00:04:35.789 size: 1.000366 MiB name: RG_ring_5_57918 00:04:35.789 size: 0.125366 MiB name: RG_ring_2_57918 00:04:35.789 size: 0.015991 MiB name: RG_ring_3_57918 00:04:35.789 end memzones------- 00:04:35.789 02:20:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:35.789 heap id: 0 total size: 824.000000 MiB number of busy elements: 319 number of free elements: 18 00:04:35.789 list of free elements. 
size: 16.780396 MiB 00:04:35.789 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:35.789 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:35.789 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:35.789 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:35.789 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:35.789 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:35.789 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:35.789 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:35.789 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:35.789 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:35.789 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:35.789 element at address: 0x20001b400000 with size: 0.561951 MiB 00:04:35.789 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:35.789 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:35.789 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:35.789 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:35.789 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:35.789 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:35.789 list of standard malloc elements. 
size: 199.288696 MiB 00:04:35.789 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:35.789 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:35.789 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:35.789 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:35.789 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:35.789 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:35.789 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:35.789 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:35.789 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:35.789 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:35.789 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:35.789 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:35.789 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:35.789 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:35.789 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:35.789 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:35.789 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:35.790 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:35.790 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b490dc0 with size: 0.000244 
MiB 00:04:35.790 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4929c0 
with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:35.790 element at 
address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:35.790 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:35.790 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886b880 with size: 0.000244 MiB 
00:04:35.791 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886d480 with 
size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:35.791 element at address: 
0x20002886f080 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:35.791 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:35.791 list of memzone associated elements. 
size: 607.930908 MiB 00:04:35.791 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:35.791 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:35.791 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:35.791 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:35.791 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:35.791 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57918_0 00:04:35.791 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:35.791 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57918_0 00:04:35.791 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:35.791 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57918_0 00:04:35.791 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:35.791 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:35.791 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:35.791 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:35.791 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:35.791 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57918_0 00:04:35.791 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:35.791 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57918 00:04:35.791 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:35.791 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57918 00:04:35.791 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:35.791 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:35.791 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:35.791 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:35.791 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:35.791 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:35.791 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:35.791 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:35.791 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:35.791 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57918 00:04:35.791 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:35.791 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57918 00:04:35.791 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:35.791 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57918 00:04:35.791 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:35.791 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57918 00:04:35.791 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:35.791 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57918 00:04:35.791 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:35.791 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57918 00:04:35.791 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:35.791 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:35.791 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:35.791 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:35.791 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:35.791 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:35.791 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:35.791 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57918 00:04:35.791 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:35.791 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57918 00:04:35.791 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:35.791 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:35.791 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:35.791 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:35.791 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:35.791 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57918 00:04:35.791 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:35.791 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:35.791 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:35.791 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57918 00:04:35.791 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:35.791 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57918 00:04:35.791 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:35.791 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57918 00:04:35.791 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:35.791 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:35.791 02:20:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:35.791 02:20:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57918 00:04:35.791 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57918 ']' 00:04:35.791 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57918 00:04:35.792 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:35.792 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.792 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57918 00:04:35.792 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.792 02:20:09 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.792 killing process with pid 57918 00:04:35.792 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57918' 00:04:35.792 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57918 00:04:35.792 02:20:09 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57918 00:04:38.332 00:04:38.332 real 0m3.942s 00:04:38.332 user 0m3.869s 00:04:38.333 sys 0m0.551s 00:04:38.333 02:20:11 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.333 02:20:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.333 ************************************ 00:04:38.333 END TEST dpdk_mem_utility 00:04:38.333 ************************************ 00:04:38.333 02:20:11 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.333 02:20:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.333 02:20:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.333 02:20:11 -- common/autotest_common.sh@10 -- # set +x 00:04:38.333 ************************************ 00:04:38.333 START TEST event 00:04:38.333 ************************************ 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.333 * Looking for test storage... 
00:04:38.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.333 02:20:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.333 02:20:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.333 02:20:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.333 02:20:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.333 02:20:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.333 02:20:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.333 02:20:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.333 02:20:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.333 02:20:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.333 02:20:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.333 02:20:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.333 02:20:11 event -- scripts/common.sh@344 -- # case "$op" in 00:04:38.333 02:20:11 event -- scripts/common.sh@345 -- # : 1 00:04:38.333 02:20:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.333 02:20:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.333 02:20:11 event -- scripts/common.sh@365 -- # decimal 1 00:04:38.333 02:20:11 event -- scripts/common.sh@353 -- # local d=1 00:04:38.333 02:20:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.333 02:20:11 event -- scripts/common.sh@355 -- # echo 1 00:04:38.333 02:20:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.333 02:20:11 event -- scripts/common.sh@366 -- # decimal 2 00:04:38.333 02:20:11 event -- scripts/common.sh@353 -- # local d=2 00:04:38.333 02:20:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.333 02:20:11 event -- scripts/common.sh@355 -- # echo 2 00:04:38.333 02:20:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.333 02:20:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.333 02:20:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.333 02:20:11 event -- scripts/common.sh@368 -- # return 0 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.333 --rc genhtml_branch_coverage=1 00:04:38.333 --rc genhtml_function_coverage=1 00:04:38.333 --rc genhtml_legend=1 00:04:38.333 --rc geninfo_all_blocks=1 00:04:38.333 --rc geninfo_unexecuted_blocks=1 00:04:38.333 00:04:38.333 ' 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.333 --rc genhtml_branch_coverage=1 00:04:38.333 --rc genhtml_function_coverage=1 00:04:38.333 --rc genhtml_legend=1 00:04:38.333 --rc geninfo_all_blocks=1 00:04:38.333 --rc geninfo_unexecuted_blocks=1 00:04:38.333 00:04:38.333 ' 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.333 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:38.333 --rc genhtml_branch_coverage=1 00:04:38.333 --rc genhtml_function_coverage=1 00:04:38.333 --rc genhtml_legend=1 00:04:38.333 --rc geninfo_all_blocks=1 00:04:38.333 --rc geninfo_unexecuted_blocks=1 00:04:38.333 00:04:38.333 ' 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.333 --rc genhtml_branch_coverage=1 00:04:38.333 --rc genhtml_function_coverage=1 00:04:38.333 --rc genhtml_legend=1 00:04:38.333 --rc geninfo_all_blocks=1 00:04:38.333 --rc geninfo_unexecuted_blocks=1 00:04:38.333 00:04:38.333 ' 00:04:38.333 02:20:11 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:38.333 02:20:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:38.333 02:20:11 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:38.333 02:20:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.333 02:20:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.333 ************************************ 00:04:38.333 START TEST event_perf 00:04:38.333 ************************************ 00:04:38.333 02:20:11 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.333 Running I/O for 1 seconds...[2024-11-28 02:20:11.964260] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:38.333 [2024-11-28 02:20:11.964373] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58026 ] 00:04:38.593 [2024-11-28 02:20:12.139337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.593 [2024-11-28 02:20:12.255899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.593 [2024-11-28 02:20:12.256110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.593 [2024-11-28 02:20:12.256073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.593 Running I/O for 1 seconds...[2024-11-28 02:20:12.256158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:39.976 00:04:39.976 lcore 0: 212966 00:04:39.976 lcore 1: 212965 00:04:39.976 lcore 2: 212964 00:04:39.976 lcore 3: 212965 00:04:39.976 done. 
00:04:39.976 00:04:39.976 real 0m1.578s 00:04:39.976 user 0m4.346s 00:04:39.976 sys 0m0.112s 00:04:39.976 02:20:13 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.976 02:20:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.976 ************************************ 00:04:39.976 END TEST event_perf 00:04:39.976 ************************************ 00:04:39.976 02:20:13 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:39.976 02:20:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:39.976 02:20:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.976 02:20:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.976 ************************************ 00:04:39.976 START TEST event_reactor 00:04:39.976 ************************************ 00:04:39.976 02:20:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:39.976 [2024-11-28 02:20:13.602846] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:39.976 [2024-11-28 02:20:13.602960] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58066 ] 00:04:40.248 [2024-11-28 02:20:13.774343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.248 [2024-11-28 02:20:13.883448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.646 test_start 00:04:41.646 oneshot 00:04:41.646 tick 100 00:04:41.646 tick 100 00:04:41.646 tick 250 00:04:41.646 tick 100 00:04:41.646 tick 100 00:04:41.646 tick 250 00:04:41.646 tick 100 00:04:41.646 tick 500 00:04:41.646 tick 100 00:04:41.646 tick 100 00:04:41.646 tick 250 00:04:41.646 tick 100 00:04:41.646 tick 100 00:04:41.646 test_end 00:04:41.646 00:04:41.647 real 0m1.550s 00:04:41.647 user 0m1.356s 00:04:41.647 sys 0m0.087s 00:04:41.647 02:20:15 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.647 02:20:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:41.647 ************************************ 00:04:41.647 END TEST event_reactor 00:04:41.647 ************************************ 00:04:41.647 02:20:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.647 02:20:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:41.647 02:20:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.647 02:20:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.647 ************************************ 00:04:41.647 START TEST event_reactor_perf 00:04:41.647 ************************************ 00:04:41.647 02:20:15 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.647 [2024-11-28 
02:20:15.214476] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:41.647 [2024-11-28 02:20:15.214569] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58102 ] 00:04:41.905 [2024-11-28 02:20:15.385676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.905 [2024-11-28 02:20:15.499659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.288 test_start 00:04:43.288 test_end 00:04:43.288 Performance: 394604 events per second 00:04:43.288 00:04:43.288 real 0m1.552s 00:04:43.288 user 0m1.355s 00:04:43.288 sys 0m0.089s 00:04:43.288 ************************************ 00:04:43.288 END TEST event_reactor_perf 00:04:43.288 ************************************ 00:04:43.288 02:20:16 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.288 02:20:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:43.288 02:20:16 event -- event/event.sh@49 -- # uname -s 00:04:43.288 02:20:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:43.288 02:20:16 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:43.288 02:20:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.288 02:20:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.288 02:20:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.288 ************************************ 00:04:43.288 START TEST event_scheduler 00:04:43.288 ************************************ 00:04:43.288 02:20:16 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:43.288 * Looking for test storage... 
00:04:43.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:43.288 02:20:16 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.288 02:20:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:43.288 02:20:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.549 02:20:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.549 02:20:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.549 02:20:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:43.549 02:20:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.549 02:20:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.549 --rc genhtml_branch_coverage=1 00:04:43.549 --rc genhtml_function_coverage=1 00:04:43.549 --rc genhtml_legend=1 00:04:43.549 --rc geninfo_all_blocks=1 00:04:43.549 --rc geninfo_unexecuted_blocks=1 00:04:43.549 00:04:43.549 ' 00:04:43.549 02:20:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.549 --rc genhtml_branch_coverage=1 00:04:43.549 --rc genhtml_function_coverage=1 00:04:43.549 --rc 
genhtml_legend=1 00:04:43.549 --rc geninfo_all_blocks=1 00:04:43.549 --rc geninfo_unexecuted_blocks=1 00:04:43.549 00:04:43.549 ' 00:04:43.549 02:20:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.549 --rc genhtml_branch_coverage=1 00:04:43.549 --rc genhtml_function_coverage=1 00:04:43.549 --rc genhtml_legend=1 00:04:43.549 --rc geninfo_all_blocks=1 00:04:43.549 --rc geninfo_unexecuted_blocks=1 00:04:43.549 00:04:43.549 ' 00:04:43.549 02:20:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.549 --rc genhtml_branch_coverage=1 00:04:43.549 --rc genhtml_function_coverage=1 00:04:43.549 --rc genhtml_legend=1 00:04:43.549 --rc geninfo_all_blocks=1 00:04:43.549 --rc geninfo_unexecuted_blocks=1 00:04:43.549 00:04:43.549 ' 00:04:43.549 02:20:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:43.549 02:20:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58173 00:04:43.549 02:20:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:43.549 02:20:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:43.549 02:20:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58173 00:04:43.550 02:20:17 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58173 ']' 00:04:43.550 02:20:17 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.550 02:20:17 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.550 02:20:17 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.550 02:20:17 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.550 02:20:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.550 [2024-11-28 02:20:17.102294] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:43.550 [2024-11-28 02:20:17.102462] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58173 ] 00:04:43.810 [2024-11-28 02:20:17.275035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.810 [2024-11-28 02:20:17.391530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.810 [2024-11-28 02:20:17.391732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.810 [2024-11-28 02:20:17.391778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.810 [2024-11-28 02:20:17.391824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.380 02:20:17 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.380 02:20:17 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:44.380 02:20:17 event.event_scheduler -- 
scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:44.380 02:20:17 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.380 02:20:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.380 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.380 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.380 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.380 POWER: Cannot set governor of lcore 0 to performance 00:04:44.380 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.380 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.380 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.380 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.380 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:44.380 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:44.380 POWER: Unable to set Power Management Environment for lcore 0 00:04:44.380 [2024-11-28 02:20:17.940547] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:44.380 [2024-11-28 02:20:17.940571] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:44.380 [2024-11-28 02:20:17.940582] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:44.380 [2024-11-28 02:20:17.940608] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:44.381 [2024-11-28 02:20:17.940617] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:44.381 [2024-11-28 02:20:17.940627] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:44.381 02:20:17 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:44.381 02:20:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:44.381 02:20:17 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.381 02:20:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.641 [2024-11-28 02:20:18.278237] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:44.641 02:20:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.641 02:20:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:44.641 02:20:18 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.641 02:20:18 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.641 02:20:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.641 ************************************ 00:04:44.641 START TEST scheduler_create_thread 00:04:44.641 ************************************ 00:04:44.641 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:44.641 02:20:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:44.641 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.641 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.641 2 00:04:44.641 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.641 02:20:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:44.641 02:20:18 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.641 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.901 3 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.901 4 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.901 5 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.901 6 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.901 7 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.901 8 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.901 9 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.901 
02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.901 10 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.901 02:20:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.281 02:20:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.282 02:20:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:46.282 02:20:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:46.282 02:20:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.282 02:20:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.220 02:20:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.220 02:20:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:47.220 02:20:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.220 02:20:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.805 02:20:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.805 02:20:21 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:47.805 02:20:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:47.806 02:20:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.806 02:20:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.745 ************************************ 00:04:48.745 END TEST scheduler_create_thread 00:04:48.745 ************************************ 00:04:48.745 02:20:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.745 00:04:48.745 real 0m3.884s 00:04:48.745 user 0m0.024s 00:04:48.745 sys 0m0.011s 00:04:48.745 02:20:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.745 02:20:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.745 02:20:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:48.745 02:20:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58173 00:04:48.745 02:20:22 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58173 ']' 00:04:48.745 02:20:22 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58173 00:04:48.745 02:20:22 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:48.745 02:20:22 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.745 02:20:22 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58173 00:04:48.745 killing process with pid 58173 00:04:48.745 02:20:22 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:48.745 02:20:22 event.event_scheduler -- 
common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:48.745 02:20:22 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58173' 00:04:48.745 02:20:22 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58173 00:04:48.745 02:20:22 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58173 00:04:49.004 [2024-11-28 02:20:22.555890] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:50.386 00:04:50.386 real 0m6.916s 00:04:50.386 user 0m14.266s 00:04:50.386 sys 0m0.539s 00:04:50.386 ************************************ 00:04:50.386 END TEST event_scheduler 00:04:50.386 ************************************ 00:04:50.386 02:20:23 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.386 02:20:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.386 02:20:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:50.386 02:20:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:50.386 02:20:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.386 02:20:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.386 02:20:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.386 ************************************ 00:04:50.386 START TEST app_repeat 00:04:50.386 ************************************ 00:04:50.386 02:20:23 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 
00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58301 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58301' 00:04:50.386 Process app_repeat pid: 58301 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:50.386 spdk_app_start Round 0 00:04:50.386 02:20:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58301 /var/tmp/spdk-nbd.sock 00:04:50.386 02:20:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:04:50.386 02:20:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.386 02:20:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.387 02:20:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.387 02:20:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.387 02:20:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.387 [2024-11-28 02:20:23.844758] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:50.387 [2024-11-28 02:20:23.844949] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58301 ] 00:04:50.387 [2024-11-28 02:20:24.023668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.646 [2024-11-28 02:20:24.161406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.646 [2024-11-28 02:20:24.161442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.216 02:20:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.216 02:20:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:51.216 02:20:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.477 Malloc0 00:04:51.477 02:20:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.736 Malloc1 00:04:51.736 02:20:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.736 02:20:25 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.736 02:20:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.995 /dev/nbd0 00:04:51.995 02:20:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.995 02:20:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.995 1+0 records in 00:04:51.995 1+0 
records out 00:04:51.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351186 s, 11.7 MB/s 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.995 02:20:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.995 02:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.995 02:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.995 02:20:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.255 /dev/nbd1 00:04:52.255 02:20:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.255 02:20:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.255 1+0 records in 00:04:52.255 1+0 records out 00:04:52.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323912 s, 12.6 MB/s 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.255 02:20:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.255 02:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.255 02:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.255 02:20:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.255 02:20:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.255 02:20:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.515 02:20:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.515 { 00:04:52.515 "nbd_device": "/dev/nbd0", 00:04:52.515 "bdev_name": "Malloc0" 00:04:52.515 }, 00:04:52.515 { 00:04:52.515 "nbd_device": "/dev/nbd1", 00:04:52.515 "bdev_name": "Malloc1" 00:04:52.515 } 00:04:52.515 ]' 00:04:52.515 02:20:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.515 02:20:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.516 { 00:04:52.516 "nbd_device": "/dev/nbd0", 00:04:52.516 "bdev_name": "Malloc0" 00:04:52.516 }, 00:04:52.516 { 00:04:52.516 "nbd_device": "/dev/nbd1", 00:04:52.516 "bdev_name": "Malloc1" 00:04:52.516 } 00:04:52.516 ]' 
00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.516 /dev/nbd1' 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.516 /dev/nbd1' 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.516 256+0 records in 00:04:52.516 256+0 records out 00:04:52.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00588938 s, 178 MB/s 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.516 256+0 records in 00:04:52.516 256+0 records out 00:04:52.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256703 s, 40.8 MB/s 00:04:52.516 02:20:26 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.516 256+0 records in 00:04:52.516 256+0 records out 00:04:52.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247571 s, 42.4 MB/s 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.516 02:20:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.776 02:20:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.776 02:20:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.776 02:20:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.776 02:20:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.776 02:20:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.776 02:20:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.776 02:20:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.776 02:20:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.776 02:20:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.776 02:20:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.035 02:20:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.294 02:20:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.294 02:20:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.553 02:20:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.934 [2024-11-28 02:20:28.341029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.934 [2024-11-28 02:20:28.450556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.934 [2024-11-28 02:20:28.450558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.193 
[2024-11-28 02:20:28.635723] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.193 [2024-11-28 02:20:28.635813] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.573 spdk_app_start Round 1 00:04:56.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.573 02:20:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.573 02:20:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:56.573 02:20:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58301 /var/tmp/spdk-nbd.sock 00:04:56.573 02:20:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:04:56.573 02:20:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.573 02:20:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.573 02:20:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:56.573 02:20:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.573 02:20:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.833 02:20:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.833 02:20:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:56.833 02:20:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.093 Malloc0 00:04:57.093 02:20:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.352 Malloc1 00:04:57.352 02:20:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.352 02:20:30 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.352 02:20:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.615 /dev/nbd0 00:04:57.615 02:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.615 02:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.615 1+0 records in 00:04:57.615 1+0 records out 00:04:57.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316592 s, 12.9 MB/s 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.615 
02:20:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.615 02:20:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.615 02:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.615 02:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.615 02:20:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.886 /dev/nbd1 00:04:57.886 02:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.886 02:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.886 1+0 records in 00:04:57.886 1+0 records out 00:04:57.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354796 s, 11.5 MB/s 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.886 02:20:31 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.886 02:20:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.886 02:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.886 02:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.886 02:20:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.886 02:20:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.886 02:20:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.172 02:20:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:58.172 { 00:04:58.172 "nbd_device": "/dev/nbd0", 00:04:58.172 "bdev_name": "Malloc0" 00:04:58.172 }, 00:04:58.172 { 00:04:58.172 "nbd_device": "/dev/nbd1", 00:04:58.172 "bdev_name": "Malloc1" 00:04:58.172 } 00:04:58.172 ]' 00:04:58.172 02:20:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.172 { 00:04:58.172 "nbd_device": "/dev/nbd0", 00:04:58.173 "bdev_name": "Malloc0" 00:04:58.173 }, 00:04:58.173 { 00:04:58.173 "nbd_device": "/dev/nbd1", 00:04:58.173 "bdev_name": "Malloc1" 00:04:58.173 } 00:04:58.173 ]' 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.173 /dev/nbd1' 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.173 /dev/nbd1' 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.173 
02:20:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.173 256+0 records in 00:04:58.173 256+0 records out 00:04:58.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512282 s, 205 MB/s 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.173 256+0 records in 00:04:58.173 256+0 records out 00:04:58.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216866 s, 48.4 MB/s 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.173 256+0 records in 00:04:58.173 256+0 records out 00:04:58.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026377 s, 39.8 MB/s 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.173 02:20:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.464 02:20:32 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.464 02:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.464 02:20:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.464 02:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.464 02:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.464 02:20:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.464 02:20:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.464 02:20:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.464 02:20:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.464 02:20:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.725 02:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.985 02:20:32 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.985 02:20:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.985 02:20:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.245 02:20:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.625 [2024-11-28 02:20:34.036572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.625 [2024-11-28 02:20:34.146328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.625 [2024-11-28 02:20:34.146352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.884 [2024-11-28 02:20:34.337976] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.884 [2024-11-28 02:20:34.338043] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.266 spdk_app_start Round 2 00:05:02.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:02.266 02:20:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.266 02:20:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:02.266 02:20:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58301 /var/tmp/spdk-nbd.sock 00:05:02.266 02:20:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:05:02.266 02:20:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.266 02:20:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.266 02:20:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.266 02:20:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.266 02:20:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.526 02:20:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.526 02:20:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.526 02:20:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.785 Malloc0 00:05:02.785 02:20:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.045 Malloc1 00:05:03.045 02:20:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.045 02:20:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.306 /dev/nbd0 00:05:03.306 02:20:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.306 02:20:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.306 1+0 records in 00:05:03.306 1+0 records out 00:05:03.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316671 s, 12.9 MB/s 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.306 02:20:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.306 02:20:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.306 02:20:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.306 02:20:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.567 /dev/nbd1 00:05:03.567 02:20:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.567 02:20:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:03.567 02:20:37 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.567 1+0 records in 00:05:03.567 1+0 records out 00:05:03.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339528 s, 12.1 MB/s 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.567 02:20:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.567 02:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.567 02:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.567 02:20:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.567 02:20:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.567 02:20:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.827 { 00:05:03.827 "nbd_device": "/dev/nbd0", 00:05:03.827 "bdev_name": "Malloc0" 00:05:03.827 }, 00:05:03.827 { 00:05:03.827 "nbd_device": "/dev/nbd1", 00:05:03.827 "bdev_name": "Malloc1" 00:05:03.827 } 00:05:03.827 ]' 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.827 { 00:05:03.827 "nbd_device": "/dev/nbd0", 00:05:03.827 "bdev_name": "Malloc0" 00:05:03.827 }, 00:05:03.827 { 00:05:03.827 "nbd_device": "/dev/nbd1", 00:05:03.827 "bdev_name": "Malloc1" 00:05:03.827 } 00:05:03.827 ]' 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.827 /dev/nbd1' 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.827 /dev/nbd1' 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.827 256+0 records in 00:05:03.827 256+0 records out 00:05:03.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137207 s, 76.4 MB/s 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.827 02:20:37 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.827 256+0 records in 00:05:03.827 256+0 records out 00:05:03.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213804 s, 49.0 MB/s 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.827 256+0 records in 00:05:03.827 256+0 records out 00:05:03.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251729 s, 41.7 MB/s 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.827 02:20:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.828 02:20:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.828 02:20:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.828 02:20:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.828 02:20:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.828 02:20:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.828 02:20:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.087 02:20:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.087 02:20:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.087 02:20:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.087 02:20:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.087 02:20:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.087 02:20:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.087 02:20:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.087 02:20:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.087 02:20:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.087 02:20:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.346 02:20:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.605 02:20:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.605 02:20:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.864 02:20:38 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:05:06.246 [2024-11-28 02:20:39.667146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.246 [2024-11-28 02:20:39.773130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.246 [2024-11-28 02:20:39.773135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.506 [2024-11-28 02:20:39.968229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.506 [2024-11-28 02:20:39.968337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.888 02:20:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58301 /var/tmp/spdk-nbd.sock 00:05:07.888 02:20:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:05:07.888 02:20:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.888 02:20:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.888 02:20:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
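The nbd_dd_data_verify calls traced above write 1 MiB of random data through each device and then cmp it back against the source file. A condensed sketch of that write/verify round-trip, using ordinary files in place of /dev/nbd* targets (the fixed temp-file path and the dropped oflag=direct are simplifications for illustration):

```shell
# Sketch of the nbd_dd_data_verify pattern: fill a temp file with
# random data, copy it to every target, then byte-compare each target
# against the source. Targets are plain files here, not /dev/nbd*.
nbd_dd_data_verify() {
    local operation=$1; shift
    local targets=("$@")
    local tmp_file=/tmp/nbdrandtest
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for t in "${targets[@]}"; do
            dd if="$tmp_file" of="$t" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        for t in "${targets[@]}"; do
            cmp -b -n 1M "$tmp_file" "$t" || return 1
        done
        rm -f "$tmp_file"
    fi
}
```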
00:05:07.888 02:20:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.888 02:20:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:08.148 02:20:41 event.app_repeat -- event/event.sh@39 -- # killprocess 58301 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58301 ']' 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58301 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58301 00:05:08.148 killing process with pid 58301 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58301' 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58301 00:05:08.148 02:20:41 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58301 00:05:09.088 spdk_app_start is called in Round 0. 00:05:09.088 Shutdown signal received, stop current app iteration 00:05:09.088 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:09.088 spdk_app_start is called in Round 1. 00:05:09.088 Shutdown signal received, stop current app iteration 00:05:09.088 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:09.088 spdk_app_start is called in Round 2. 
00:05:09.088 Shutdown signal received, stop current app iteration 00:05:09.088 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:09.088 spdk_app_start is called in Round 3. 00:05:09.088 Shutdown signal received, stop current app iteration 00:05:09.349 02:20:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:09.349 02:20:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:09.349 00:05:09.349 real 0m19.011s 00:05:09.349 user 0m40.560s 00:05:09.349 sys 0m2.732s 00:05:09.349 02:20:42 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.349 02:20:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.349 ************************************ 00:05:09.349 END TEST app_repeat 00:05:09.349 ************************************ 00:05:09.349 02:20:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:09.349 02:20:42 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:09.349 02:20:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.349 02:20:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.349 02:20:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.349 ************************************ 00:05:09.349 START TEST cpu_locks 00:05:09.349 ************************************ 00:05:09.349 02:20:42 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:09.349 * Looking for test storage... 
00:05:09.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:09.349 02:20:42 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.349 02:20:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.349 02:20:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.610 02:20:43 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.610 02:20:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:09.610 02:20:43 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.610 02:20:43 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.610 --rc genhtml_branch_coverage=1 00:05:09.610 --rc genhtml_function_coverage=1 00:05:09.610 --rc genhtml_legend=1 00:05:09.610 --rc geninfo_all_blocks=1 00:05:09.610 --rc geninfo_unexecuted_blocks=1 00:05:09.610 00:05:09.610 ' 00:05:09.610 02:20:43 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.610 --rc genhtml_branch_coverage=1 00:05:09.610 --rc genhtml_function_coverage=1 00:05:09.610 --rc genhtml_legend=1 00:05:09.610 --rc geninfo_all_blocks=1 00:05:09.610 --rc geninfo_unexecuted_blocks=1 
00:05:09.610 00:05:09.610 ' 00:05:09.610 02:20:43 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.610 --rc genhtml_branch_coverage=1 00:05:09.610 --rc genhtml_function_coverage=1 00:05:09.610 --rc genhtml_legend=1 00:05:09.610 --rc geninfo_all_blocks=1 00:05:09.610 --rc geninfo_unexecuted_blocks=1 00:05:09.610 00:05:09.610 ' 00:05:09.610 02:20:43 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.610 --rc genhtml_branch_coverage=1 00:05:09.610 --rc genhtml_function_coverage=1 00:05:09.610 --rc genhtml_legend=1 00:05:09.610 --rc geninfo_all_blocks=1 00:05:09.610 --rc geninfo_unexecuted_blocks=1 00:05:09.610 00:05:09.610 ' 00:05:09.610 02:20:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:09.610 02:20:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:09.610 02:20:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:09.610 02:20:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:09.610 02:20:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.610 02:20:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.610 02:20:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.610 ************************************ 00:05:09.610 START TEST default_locks 00:05:09.610 ************************************ 00:05:09.610 02:20:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:09.610 02:20:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58742 00:05:09.610 02:20:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.610 
02:20:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58742 00:05:09.610 02:20:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58742 ']' 00:05:09.610 02:20:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.610 02:20:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.611 02:20:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.611 02:20:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.611 02:20:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.611 [2024-11-28 02:20:43.180195] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
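The cpu_locks preamble above runs scripts/common.sh's lt 1.15 2 to decide which lcov options apply; cmp_versions splits each version string into fields and compares them numerically. A reduced stand-in for that comparator (the names mirror the trace, but this is a simplified sketch, splitting on '.' only rather than the ".-:" set the real helper uses):

```shell
# Simplified stand-in for the cmp_versions helper traced above: split
# both versions on '.', compare field by field numerically, and let
# lt A B mean "A is strictly older than B". Missing fields count as 0.
cmp_versions() {
    local IFS=.
    local -a ver1=($1) ver2=($3)
    local op=$2 v a b
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < n; v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if ((a > b)); then [ "$op" = '>' ]; return; fi
        if ((a < b)); then [ "$op" = '<' ]; return; fi
    done
    return 1   # equal: neither strictly < nor >
}
lt() { cmp_versions "$1" '<' "$2"; }
```

Field-wise comparison is what makes lt 1.2.9 1.10 come out true, where a plain string compare would not.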
00:05:09.611 [2024-11-28 02:20:43.180333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58742 ] 00:05:09.871 [2024-11-28 02:20:43.353603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.871 [2024-11-28 02:20:43.464897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.813 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.813 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:10.813 02:20:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58742 00:05:10.813 02:20:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58742 00:05:10.813 02:20:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58742 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58742 ']' 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58742 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58742 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.115 killing process with pid 58742 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58742' 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58742 00:05:11.115 02:20:44 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58742 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58742 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58742 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58742 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58742 ']' 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
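The killprocess helper traced here first probes the pid with kill -0, looks up the process name with ps, then signals the process and reaps it. A minimal sketch of that pattern (the sudo and reactor-name checks the real helper performs are omitted):

```shell
# Sketch of the killprocess pattern: verify the pid is alive, log the
# process name, send SIGTERM, and wait so no zombie is left behind.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1           # not running
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                          # reaps only our own children
    return 0
}
```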
00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.649 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58742) - No such process 00:05:13.649 ERROR: process (pid: 58742) is no longer running 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.649 00:05:13.649 real 0m3.893s 00:05:13.649 user 0m3.819s 00:05:13.649 sys 0m0.631s 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.649 02:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.649 ************************************ 00:05:13.649 END TEST default_locks 00:05:13.649 ************************************ 00:05:13.649 02:20:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:13.649 02:20:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:13.649 02:20:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.649 02:20:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.649 ************************************ 00:05:13.649 START TEST default_locks_via_rpc 00:05:13.649 ************************************ 00:05:13.649 02:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:13.649 02:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58812 00:05:13.649 02:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58812 00:05:13.649 02:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58812 ']' 00:05:13.649 02:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.649 02:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.649 02:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.649 02:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.649 02:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.649 02:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.649 [2024-11-28 02:20:47.148196] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
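The waitforlisten calls in the trace block until the freshly started spdk_tgt is listening on its UNIX-domain RPC socket, retrying up to max_retries times. A rough sketch of that polling loop; the real helper probes the socket with an actual RPC call, so the plain existence check here is only a stand-in:

```shell
# Sketch of the waitforlisten pattern: poll until the target process
# has created its RPC socket path, giving up if the process dies or
# max_retries is exhausted. Existence of the path stands in for a
# real RPC probe.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died early
        [ -e "$rpc_addr" ] && return 0           # socket path is up
        sleep 0.1
    done
    return 1
}
```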
00:05:13.649 [2024-11-28 02:20:47.148343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58812 ] 00:05:13.649 [2024-11-28 02:20:47.322280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.908 [2024-11-28 02:20:47.424504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.846 02:20:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58812 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58812 00:05:14.846 02:20:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58812 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58812 ']' 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58812 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58812 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.105 killing process with pid 58812 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58812' 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58812 00:05:15.105 02:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58812 00:05:17.642 00:05:17.642 real 0m4.052s 00:05:17.642 user 0m3.977s 00:05:17.642 sys 0m0.686s 00:05:17.642 02:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.642 02:20:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.642 ************************************ 00:05:17.642 END TEST default_locks_via_rpc 00:05:17.642 ************************************ 00:05:17.642 02:20:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:17.642 02:20:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.642 02:20:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.642 02:20:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.642 ************************************ 00:05:17.642 START TEST non_locking_app_on_locked_coremask 00:05:17.642 ************************************ 00:05:17.642 02:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:17.642 02:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58886 00:05:17.642 02:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.642 02:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58886 /var/tmp/spdk.sock 00:05:17.642 02:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58886 ']' 00:05:17.642 02:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.642 02:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
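For context on the repeated "Waiting for process to start up and listen on UNIX domain socket..." lines above: the `waitforlisten` helper in autotest_common.sh polls for the target's RPC socket with a bounded retry count (`max_retries=100` appears in the trace). A minimal sketch of that polling idea, with names assumed and the RPC ping the real helper performs omitted:

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforlisten retry loop (simplified; the real helper
# also verifies the process is alive and answers an rpc.py ping).
waitforlisten_sketch() {
  local rpc_addr=$1 max_retries=${2:-100} i=0
  while (( i++ < max_retries )); do
    [[ -e $rpc_addr ]] && return 0   # real code checks for a listening socket
    sleep 0.1
  done
  return 1
}

tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &   # stand-in for spdk_tgt creating its socket
waitforlisten_sketch "$tmp/spdk.sock"; rc=$?
wait
rm -rf "$tmp"
echo "rc=$rc"
```

The bounded loop is why a target that never comes up fails the test quickly instead of hanging the whole run.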
00:05:17.642 02:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.642 02:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.642 02:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.642 [2024-11-28 02:20:51.257006] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:17.642 [2024-11-28 02:20:51.257139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58886 ] 00:05:17.901 [2024-11-28 02:20:51.431527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.901 [2024-11-28 02:20:51.539215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.839 02:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.840 02:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:18.840 02:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:18.840 02:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58907 00:05:18.840 02:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58907 /var/tmp/spdk2.sock 00:05:18.840 02:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58907 ']' 00:05:18.840 02:20:52 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.840 02:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.840 02:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.840 02:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.840 02:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.840 [2024-11-28 02:20:52.417478] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:18.840 [2024-11-28 02:20:52.417614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58907 ] 00:05:19.099 [2024-11-28 02:20:52.587240] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
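The "CPU core locks deactivated" notice above comes from the second target being started with `--disable-cpumask-locks`, so it can share core 0 with the first. With locks enabled, the per-core claim behaves like an exclusive advisory lock on a `/var/tmp/spdk_cpu_lock_*` file. A hedged stand-in using util-linux `flock(1)` (file name and behavior assumed, not SPDK's actual implementation):

```shell
#!/usr/bin/env bash
# Sketch: a first claimant takes an exclusive lock on a per-core file;
# a second non-blocking claimant on the same file must fail.
lock=$(mktemp)          # stand-in for /var/tmp/spdk_cpu_lock_000

exec 9>"$lock"          # first instance opens and claims the core lock
if flock -n 9; then first="claimed"; fi

# second instance tries the same file non-blocking and is rejected
if flock -n "$lock" true; then second="free"; else second="busy"; fi

exec 9>&-               # release
rm -f "$lock"
echo "$first $second"
```

This mirrors why the locked-coremask tests later log "Cannot create lock on core 0, probably process ... has claimed it" when a second target uses the same mask without `--disable-cpumask-locks`.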
00:05:19.099 [2024-11-28 02:20:52.587295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.359 [2024-11-28 02:20:52.820431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.897 02:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.897 02:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:21.897 02:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58886 00:05:21.897 02:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58886 00:05:21.897 02:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58886 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58886 ']' 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58886 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58886 00:05:21.897 killing process with pid 58886 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58886' 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58886 00:05:21.897 02:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58886 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58907 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58907 ']' 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58907 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58907 00:05:27.196 killing process with pid 58907 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58907' 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58907 00:05:27.196 02:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58907 00:05:29.104 00:05:29.104 real 0m11.286s 00:05:29.104 user 0m11.504s 00:05:29.104 sys 0m1.241s 00:05:29.104 02:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:29.104 02:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.104 ************************************ 00:05:29.104 END TEST non_locking_app_on_locked_coremask 00:05:29.104 ************************************ 00:05:29.104 02:21:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:29.104 02:21:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.104 02:21:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.104 02:21:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.104 ************************************ 00:05:29.104 START TEST locking_app_on_unlocked_coremask 00:05:29.104 ************************************ 00:05:29.104 02:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:29.104 02:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59046 00:05:29.104 02:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:29.104 02:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59046 /var/tmp/spdk.sock 00:05:29.104 02:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59046 ']' 00:05:29.104 02:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.104 02:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.104 02:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.104 02:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.104 02:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.104 [2024-11-28 02:21:02.613807] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:29.104 [2024-11-28 02:21:02.614017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59046 ] 00:05:29.363 [2024-11-28 02:21:02.786228] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:29.363 [2024-11-28 02:21:02.786371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.363 [2024-11-28 02:21:02.893197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.299 02:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.299 02:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:30.299 02:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.299 02:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59071 00:05:30.299 02:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59071 /var/tmp/spdk2.sock 00:05:30.299 02:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59071 ']' 00:05:30.299 02:21:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.299 02:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.299 02:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.299 02:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.299 02:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.299 [2024-11-28 02:21:03.744120] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:30.299 [2024-11-28 02:21:03.744320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59071 ] 00:05:30.300 [2024-11-28 02:21:03.910457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.559 [2024-11-28 02:21:04.122890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59071 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59071 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59046 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59046 ']' 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59046 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59046 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.110 killing process with pid 59046 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59046' 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59046 00:05:33.110 02:21:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59046 00:05:38.428 02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59071 00:05:38.428 02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59071 ']' 00:05:38.428 02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59071 00:05:38.428 02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:38.428 
02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.428 02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59071 00:05:38.428 killing process with pid 59071 00:05:38.428 02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.428 02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.428 02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59071' 00:05:38.428 02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59071 00:05:38.428 02:21:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59071 00:05:40.341 ************************************ 00:05:40.341 END TEST locking_app_on_unlocked_coremask 00:05:40.341 ************************************ 00:05:40.341 00:05:40.341 real 0m11.000s 00:05:40.341 user 0m11.198s 00:05:40.341 sys 0m1.130s 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.341 02:21:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:40.341 02:21:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.341 02:21:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.341 02:21:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.341 ************************************ 00:05:40.341 START TEST locking_app_on_locked_coremask 00:05:40.341 
************************************ 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59212 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59212 /var/tmp/spdk.sock 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59212 ']' 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.341 02:21:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.341 [2024-11-28 02:21:13.678596] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:40.341 [2024-11-28 02:21:13.678710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59212 ] 00:05:40.341 [2024-11-28 02:21:13.850689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.341 [2024-11-28 02:21:13.956734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59228 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59228 /var/tmp/spdk2.sock 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59228 /var/tmp/spdk2.sock 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59228 /var/tmp/spdk2.sock 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59228 ']' 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.282 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.283 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.283 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.283 02:21:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.283 [2024-11-28 02:21:14.835095] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:41.283 [2024-11-28 02:21:14.835292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59228 ] 00:05:41.542 [2024-11-28 02:21:15.000967] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59212 has claimed it. 00:05:41.542 [2024-11-28 02:21:15.001025] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
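The `NOT waitforlisten ...` invocation traced above is a negative test: the second target is expected to fail to claim the core, and the `NOT` wrapper in autotest_common.sh inverts the exit status so an expected failure counts as a pass (the `es=1` / `(( !es == 0 ))` lines). A simplified sketch of that wrapper pattern (the real helper also validates the argument via `type -t`, omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the NOT helper: succeed only when the wrapped command fails.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded -> the test should fail
  else
    return 0   # command failed as expected -> the test passes
  fi
}

NOT false && outcome="expected-failure-ok"
NOT true  || outcome2="unexpected-success-caught"
echo "$outcome $outcome2"
```

Wrapping the call this way lets the suite assert on failure paths (lock contention, "No such process") with the same pass/fail plumbing as positive tests.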
00:05:41.802 ERROR: process (pid: 59228) is no longer running 00:05:41.802 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59228) - No such process 00:05:41.802 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.802 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:41.802 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:41.802 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.802 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.802 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.802 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59212 00:05:41.802 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59212 00:05:41.802 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59212 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59212 ']' 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59212 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59212 00:05:42.372 
killing process with pid 59212 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59212' 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59212 00:05:42.372 02:21:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59212 00:05:44.915 00:05:44.915 real 0m4.656s 00:05:44.915 user 0m4.786s 00:05:44.915 sys 0m0.789s 00:05:44.915 02:21:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.915 02:21:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.915 ************************************ 00:05:44.915 END TEST locking_app_on_locked_coremask 00:05:44.915 ************************************ 00:05:44.915 02:21:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:44.915 02:21:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.915 02:21:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.915 02:21:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.915 ************************************ 00:05:44.915 START TEST locking_overlapped_coremask 00:05:44.915 ************************************ 00:05:44.915 02:21:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:44.915 02:21:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59298 00:05:44.915 02:21:18 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:44.915 02:21:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59298 /var/tmp/spdk.sock 00:05:44.915 02:21:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59298 ']' 00:05:44.915 02:21:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.915 02:21:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.915 02:21:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.915 02:21:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.915 02:21:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.915 [2024-11-28 02:21:18.400144] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:44.915 [2024-11-28 02:21:18.400353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59298 ] 00:05:44.915 [2024-11-28 02:21:18.577155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.174 [2024-11-28 02:21:18.695186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.174 [2024-11-28 02:21:18.695280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.174 [2024-11-28 02:21:18.695321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59323 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59323 /var/tmp/spdk2.sock 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59323 /var/tmp/spdk2.sock 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.113 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59323 /var/tmp/spdk2.sock 00:05:46.114 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59323 ']' 00:05:46.114 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.114 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.114 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.114 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.114 02:21:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.114 [2024-11-28 02:21:19.674753] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:46.114 [2024-11-28 02:21:19.675007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59323 ] 00:05:46.374 [2024-11-28 02:21:19.845680] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59298 has claimed it. 00:05:46.374 [2024-11-28 02:21:19.845764] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:46.634 ERROR: process (pid: 59323) is no longer running 00:05:46.634 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59323) - No such process 00:05:46.634 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.634 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:46.634 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:46.634 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.634 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.634 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.634 02:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:46.634 02:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.635 02:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.635 02:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.635 02:21:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59298 00:05:46.635 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59298 ']' 00:05:46.635 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59298 00:05:46.895 02:21:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:46.895 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.895 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59298 00:05:46.895 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.895 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.895 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59298' 00:05:46.895 killing process with pid 59298 00:05:46.895 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59298 00:05:46.895 02:21:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59298 00:05:49.435 00:05:49.435 real 0m4.456s 00:05:49.435 user 0m12.150s 00:05:49.435 sys 0m0.544s 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.435 ************************************ 00:05:49.435 END TEST locking_overlapped_coremask 00:05:49.435 ************************************ 00:05:49.435 02:21:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:49.435 02:21:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.435 02:21:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.435 02:21:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.435 ************************************ 00:05:49.435 START TEST 
locking_overlapped_coremask_via_rpc 00:05:49.435 ************************************ 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59387 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59387 /var/tmp/spdk.sock 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59387 ']' 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.435 02:21:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.435 [2024-11-28 02:21:22.910290] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:49.435 [2024-11-28 02:21:22.910418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59387 ] 00:05:49.435 [2024-11-28 02:21:23.085646] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:49.435 [2024-11-28 02:21:23.085704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.693 [2024-11-28 02:21:23.195065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.693 [2024-11-28 02:21:23.195164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.693 [2024-11-28 02:21:23.195201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.644 02:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.644 02:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.644 02:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59405 00:05:50.644 02:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59405 /var/tmp/spdk2.sock 00:05:50.644 02:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:50.644 02:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59405 ']' 00:05:50.644 02:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.644 02:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.644 02:21:24 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.644 02:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.644 02:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.644 [2024-11-28 02:21:24.125111] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:50.644 [2024-11-28 02:21:24.125316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59405 ] 00:05:50.644 [2024-11-28 02:21:24.293917] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:50.644 [2024-11-28 02:21:24.293992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.915 [2024-11-28 02:21:24.530843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.915 [2024-11-28 02:21:24.534118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.915 [2024-11-28 02:21:24.534156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.452 02:21:26 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.452 [2024-11-28 02:21:26.736123] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59387 has claimed it. 00:05:53.452 request: 00:05:53.452 { 00:05:53.452 "method": "framework_enable_cpumask_locks", 00:05:53.452 "req_id": 1 00:05:53.452 } 00:05:53.452 Got JSON-RPC error response 00:05:53.452 response: 00:05:53.452 { 00:05:53.452 "code": -32603, 00:05:53.452 "message": "Failed to claim CPU core: 2" 00:05:53.452 } 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59387 /var/tmp/spdk.sock 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59387 ']' 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59405 /var/tmp/spdk2.sock 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59405 ']' 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.452 02:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.712 02:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.712 02:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.712 02:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:53.712 02:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.712 02:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.712 02:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.712 00:05:53.712 real 0m4.358s 00:05:53.712 user 0m1.313s 00:05:53.712 sys 0m0.180s 00:05:53.712 02:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.712 02:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.712 ************************************ 00:05:53.712 END TEST locking_overlapped_coremask_via_rpc 00:05:53.712 ************************************ 00:05:53.712 02:21:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:53.712 02:21:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59387 ]] 00:05:53.712 02:21:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59387 00:05:53.712 02:21:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59387 ']' 00:05:53.713 02:21:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59387 00:05:53.713 02:21:27 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:53.713 02:21:27 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.713 02:21:27 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59387 00:05:53.713 02:21:27 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.713 02:21:27 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.713 killing process with pid 59387 00:05:53.713 02:21:27 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59387' 00:05:53.713 02:21:27 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59387 00:05:53.713 02:21:27 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59387 00:05:56.254 02:21:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59405 ]] 00:05:56.254 02:21:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59405 00:05:56.254 02:21:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59405 ']' 00:05:56.254 02:21:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59405 00:05:56.254 02:21:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:56.254 02:21:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.254 02:21:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59405 00:05:56.254 killing process with pid 59405 00:05:56.254 02:21:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:56.254 02:21:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:56.255 02:21:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59405' 00:05:56.255 02:21:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59405 00:05:56.255 02:21:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59405 00:05:58.796 02:21:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.796 Process with pid 59387 is not found 00:05:58.796 02:21:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:58.796 02:21:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59387 ]] 00:05:58.796 02:21:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59387 00:05:58.796 02:21:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59387 ']' 00:05:58.796 02:21:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59387 00:05:58.796 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59387) - No such process 00:05:58.796 02:21:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59387 is not found' 00:05:58.796 02:21:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59405 ]] 00:05:58.796 02:21:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59405 00:05:58.796 02:21:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59405 ']' 00:05:58.796 02:21:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59405 00:05:58.796 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59405) - No such process 00:05:58.796 02:21:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59405 is not found' 00:05:58.796 Process with pid 59405 is not found 00:05:58.796 02:21:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.796 00:05:58.796 real 0m49.232s 00:05:58.796 user 1m24.742s 00:05:58.796 sys 0m6.384s 00:05:58.796 02:21:32 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.796 02:21:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.796 
************************************ 00:05:58.796 END TEST cpu_locks 00:05:58.796 ************************************ 00:05:58.796 00:05:58.796 real 1m20.458s 00:05:58.796 user 2m26.858s 00:05:58.796 sys 0m10.348s 00:05:58.796 02:21:32 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.796 02:21:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.796 ************************************ 00:05:58.796 END TEST event 00:05:58.796 ************************************ 00:05:58.796 02:21:32 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.796 02:21:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.796 02:21:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.796 02:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:58.796 ************************************ 00:05:58.796 START TEST thread 00:05:58.796 ************************************ 00:05:58.796 02:21:32 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.796 * Looking for test storage... 
00:05:58.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:58.796 02:21:32 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.796 02:21:32 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.796 02:21:32 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.796 02:21:32 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.797 02:21:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.797 02:21:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.797 02:21:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.797 02:21:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.797 02:21:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.797 02:21:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.797 02:21:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.797 02:21:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.797 02:21:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.797 02:21:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.797 02:21:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.797 02:21:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:58.797 02:21:32 thread -- scripts/common.sh@345 -- # : 1 00:05:58.797 02:21:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.797 02:21:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.797 02:21:32 thread -- scripts/common.sh@365 -- # decimal 1 00:05:58.797 02:21:32 thread -- scripts/common.sh@353 -- # local d=1 00:05:58.797 02:21:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.797 02:21:32 thread -- scripts/common.sh@355 -- # echo 1 00:05:58.797 02:21:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.797 02:21:32 thread -- scripts/common.sh@366 -- # decimal 2 00:05:58.797 02:21:32 thread -- scripts/common.sh@353 -- # local d=2 00:05:58.797 02:21:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.797 02:21:32 thread -- scripts/common.sh@355 -- # echo 2 00:05:58.797 02:21:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.797 02:21:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.797 02:21:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.797 02:21:32 thread -- scripts/common.sh@368 -- # return 0 00:05:58.797 02:21:32 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.797 02:21:32 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.797 --rc genhtml_branch_coverage=1 00:05:58.797 --rc genhtml_function_coverage=1 00:05:58.797 --rc genhtml_legend=1 00:05:58.797 --rc geninfo_all_blocks=1 00:05:58.797 --rc geninfo_unexecuted_blocks=1 00:05:58.797 00:05:58.797 ' 00:05:58.797 02:21:32 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.797 --rc genhtml_branch_coverage=1 00:05:58.797 --rc genhtml_function_coverage=1 00:05:58.797 --rc genhtml_legend=1 00:05:58.797 --rc geninfo_all_blocks=1 00:05:58.797 --rc geninfo_unexecuted_blocks=1 00:05:58.797 00:05:58.797 ' 00:05:58.797 02:21:32 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.797 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.797 --rc genhtml_branch_coverage=1 00:05:58.797 --rc genhtml_function_coverage=1 00:05:58.797 --rc genhtml_legend=1 00:05:58.797 --rc geninfo_all_blocks=1 00:05:58.797 --rc geninfo_unexecuted_blocks=1 00:05:58.797 00:05:58.797 ' 00:05:58.797 02:21:32 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.797 --rc genhtml_branch_coverage=1 00:05:58.797 --rc genhtml_function_coverage=1 00:05:58.797 --rc genhtml_legend=1 00:05:58.797 --rc geninfo_all_blocks=1 00:05:58.797 --rc geninfo_unexecuted_blocks=1 00:05:58.797 00:05:58.797 ' 00:05:58.797 02:21:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.797 02:21:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:58.797 02:21:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.797 02:21:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.797 ************************************ 00:05:58.797 START TEST thread_poller_perf 00:05:58.797 ************************************ 00:05:58.797 02:21:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.057 [2024-11-28 02:21:32.483789] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:59.057 [2024-11-28 02:21:32.483884] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59600 ] 00:05:59.057 [2024-11-28 02:21:32.655244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.316 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:59.316 [2024-11-28 02:21:32.757616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.699 [2024-11-28T02:21:34.378Z] ====================================== 00:06:00.699 [2024-11-28T02:21:34.378Z] busy:2297737368 (cyc) 00:06:00.699 [2024-11-28T02:21:34.378Z] total_run_count: 411000 00:06:00.699 [2024-11-28T02:21:34.378Z] tsc_hz: 2290000000 (cyc) 00:06:00.699 [2024-11-28T02:21:34.378Z] ====================================== 00:06:00.699 [2024-11-28T02:21:34.378Z] poller_cost: 5590 (cyc), 2441 (nsec) 00:06:00.699 00:06:00.699 real 0m1.543s 00:06:00.699 user 0m1.354s 00:06:00.699 sys 0m0.083s 00:06:00.699 ************************************ 00:06:00.699 END TEST thread_poller_perf 00:06:00.699 ************************************ 00:06:00.699 02:21:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.699 02:21:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.699 02:21:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.699 02:21:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:00.699 02:21:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.699 02:21:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.699 ************************************ 00:06:00.699 START TEST thread_poller_perf 00:06:00.699 
************************************ 00:06:00.699 02:21:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.699 [2024-11-28 02:21:34.094789] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:00.699 [2024-11-28 02:21:34.094906] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59637 ] 00:06:00.699 [2024-11-28 02:21:34.265516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.699 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:00.699 [2024-11-28 02:21:34.372800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.078 [2024-11-28T02:21:35.757Z] ====================================== 00:06:02.078 [2024-11-28T02:21:35.757Z] busy:2293219202 (cyc) 00:06:02.078 [2024-11-28T02:21:35.757Z] total_run_count: 5510000 00:06:02.078 [2024-11-28T02:21:35.757Z] tsc_hz: 2290000000 (cyc) 00:06:02.078 [2024-11-28T02:21:35.757Z] ====================================== 00:06:02.078 [2024-11-28T02:21:35.757Z] poller_cost: 416 (cyc), 181 (nsec) 00:06:02.078 00:06:02.078 real 0m1.548s 00:06:02.078 user 0m1.341s 00:06:02.078 sys 0m0.101s 00:06:02.079 02:21:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.079 ************************************ 00:06:02.079 END TEST thread_poller_perf 00:06:02.079 ************************************ 00:06:02.079 02:21:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.079 02:21:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:02.079 ************************************ 00:06:02.079 END TEST thread 00:06:02.079 ************************************ 00:06:02.079 
00:06:02.079 real 0m3.443s 00:06:02.079 user 0m2.862s 00:06:02.079 sys 0m0.374s 00:06:02.079 02:21:35 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.079 02:21:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.079 02:21:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:02.079 02:21:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:02.079 02:21:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.079 02:21:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.079 02:21:35 -- common/autotest_common.sh@10 -- # set +x 00:06:02.079 ************************************ 00:06:02.079 START TEST app_cmdline 00:06:02.079 ************************************ 00:06:02.079 02:21:35 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:02.338 * Looking for test storage... 00:06:02.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.338 02:21:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.338 --rc genhtml_branch_coverage=1 00:06:02.338 --rc genhtml_function_coverage=1 00:06:02.338 --rc 
genhtml_legend=1 00:06:02.338 --rc geninfo_all_blocks=1 00:06:02.338 --rc geninfo_unexecuted_blocks=1 00:06:02.338 00:06:02.338 ' 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.338 --rc genhtml_branch_coverage=1 00:06:02.338 --rc genhtml_function_coverage=1 00:06:02.338 --rc genhtml_legend=1 00:06:02.338 --rc geninfo_all_blocks=1 00:06:02.338 --rc geninfo_unexecuted_blocks=1 00:06:02.338 00:06:02.338 ' 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.338 --rc genhtml_branch_coverage=1 00:06:02.338 --rc genhtml_function_coverage=1 00:06:02.338 --rc genhtml_legend=1 00:06:02.338 --rc geninfo_all_blocks=1 00:06:02.338 --rc geninfo_unexecuted_blocks=1 00:06:02.338 00:06:02.338 ' 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.338 --rc genhtml_branch_coverage=1 00:06:02.338 --rc genhtml_function_coverage=1 00:06:02.338 --rc genhtml_legend=1 00:06:02.338 --rc geninfo_all_blocks=1 00:06:02.338 --rc geninfo_unexecuted_blocks=1 00:06:02.338 00:06:02.338 ' 00:06:02.338 02:21:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:02.338 02:21:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59726 00:06:02.338 02:21:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:02.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:02.338 02:21:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59726 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59726 ']' 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.338 02:21:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.597 [2024-11-28 02:21:36.034214] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:02.597 [2024-11-28 02:21:36.034439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59726 ] 00:06:02.597 [2024-11-28 02:21:36.205207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.857 [2024-11-28 02:21:36.312340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:03.846 02:21:37 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:03.846 { 00:06:03.846 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:06:03.846 "fields": { 00:06:03.846 "major": 25, 00:06:03.846 "minor": 1, 00:06:03.846 "patch": 0, 00:06:03.846 "suffix": "-pre", 00:06:03.846 "commit": "35cd3e84d" 00:06:03.846 } 00:06:03.846 } 00:06:03.846 02:21:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:03.846 
02:21:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:03.846 02:21:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:03.846 02:21:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:03.846 02:21:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:03.846 02:21:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:03.846 02:21:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.846 02:21:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:03.846 02:21:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:03.846 02:21:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.846 02:21:37 app_cmdline -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:03.846 02:21:37 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.105 request: 00:06:04.105 { 00:06:04.105 "method": "env_dpdk_get_mem_stats", 00:06:04.105 "req_id": 1 00:06:04.105 } 00:06:04.105 Got JSON-RPC error response 00:06:04.105 response: 00:06:04.105 { 00:06:04.105 "code": -32601, 00:06:04.105 "message": "Method not found" 00:06:04.105 } 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.105 02:21:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59726 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59726 ']' 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59726 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59726 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59726' 00:06:04.105 killing process with pid 59726 00:06:04.105 02:21:37 app_cmdline -- 
common/autotest_common.sh@973 -- # kill 59726 00:06:04.105 02:21:37 app_cmdline -- common/autotest_common.sh@978 -- # wait 59726 00:06:06.646 00:06:06.646 real 0m4.188s 00:06:06.646 user 0m4.375s 00:06:06.646 sys 0m0.583s 00:06:06.646 02:21:39 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.646 02:21:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.646 ************************************ 00:06:06.646 END TEST app_cmdline 00:06:06.646 ************************************ 00:06:06.646 02:21:39 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:06.646 02:21:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.646 02:21:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.646 02:21:39 -- common/autotest_common.sh@10 -- # set +x 00:06:06.646 ************************************ 00:06:06.646 START TEST version 00:06:06.646 ************************************ 00:06:06.646 02:21:39 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:06.646 * Looking for test storage... 
00:06:06.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:06.646 02:21:40 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.646 02:21:40 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.646 02:21:40 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.646 02:21:40 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.646 02:21:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.646 02:21:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.646 02:21:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.646 02:21:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.646 02:21:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.646 02:21:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.646 02:21:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.646 02:21:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.646 02:21:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.646 02:21:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.646 02:21:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.646 02:21:40 version -- scripts/common.sh@344 -- # case "$op" in 00:06:06.646 02:21:40 version -- scripts/common.sh@345 -- # : 1 00:06:06.646 02:21:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.646 02:21:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.646 02:21:40 version -- scripts/common.sh@365 -- # decimal 1 00:06:06.646 02:21:40 version -- scripts/common.sh@353 -- # local d=1 00:06:06.646 02:21:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.646 02:21:40 version -- scripts/common.sh@355 -- # echo 1 00:06:06.646 02:21:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.646 02:21:40 version -- scripts/common.sh@366 -- # decimal 2 00:06:06.646 02:21:40 version -- scripts/common.sh@353 -- # local d=2 00:06:06.646 02:21:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.646 02:21:40 version -- scripts/common.sh@355 -- # echo 2 00:06:06.646 02:21:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.646 02:21:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.646 02:21:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.646 02:21:40 version -- scripts/common.sh@368 -- # return 0 00:06:06.646 02:21:40 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.646 02:21:40 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.646 --rc genhtml_branch_coverage=1 00:06:06.646 --rc genhtml_function_coverage=1 00:06:06.646 --rc genhtml_legend=1 00:06:06.646 --rc geninfo_all_blocks=1 00:06:06.646 --rc geninfo_unexecuted_blocks=1 00:06:06.646 00:06:06.646 ' 00:06:06.646 02:21:40 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.646 --rc genhtml_branch_coverage=1 00:06:06.646 --rc genhtml_function_coverage=1 00:06:06.646 --rc genhtml_legend=1 00:06:06.646 --rc geninfo_all_blocks=1 00:06:06.646 --rc geninfo_unexecuted_blocks=1 00:06:06.646 00:06:06.646 ' 00:06:06.646 02:21:40 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.646 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.646 --rc genhtml_branch_coverage=1 00:06:06.646 --rc genhtml_function_coverage=1 00:06:06.646 --rc genhtml_legend=1 00:06:06.646 --rc geninfo_all_blocks=1 00:06:06.646 --rc geninfo_unexecuted_blocks=1 00:06:06.646 00:06:06.646 ' 00:06:06.646 02:21:40 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.646 --rc genhtml_branch_coverage=1 00:06:06.646 --rc genhtml_function_coverage=1 00:06:06.646 --rc genhtml_legend=1 00:06:06.646 --rc geninfo_all_blocks=1 00:06:06.646 --rc geninfo_unexecuted_blocks=1 00:06:06.646 00:06:06.646 ' 00:06:06.646 02:21:40 version -- app/version.sh@17 -- # get_header_version major 00:06:06.646 02:21:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.646 02:21:40 version -- app/version.sh@14 -- # cut -f2 00:06:06.646 02:21:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.646 02:21:40 version -- app/version.sh@17 -- # major=25 00:06:06.646 02:21:40 version -- app/version.sh@18 -- # get_header_version minor 00:06:06.646 02:21:40 version -- app/version.sh@14 -- # cut -f2 00:06:06.646 02:21:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.646 02:21:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.646 02:21:40 version -- app/version.sh@18 -- # minor=1 00:06:06.646 02:21:40 version -- app/version.sh@19 -- # get_header_version patch 00:06:06.646 02:21:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.646 02:21:40 version -- app/version.sh@14 -- # cut -f2 00:06:06.646 02:21:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.646 02:21:40 version -- app/version.sh@19 -- # patch=0 00:06:06.646 
02:21:40 version -- app/version.sh@20 -- # get_header_version suffix 00:06:06.646 02:21:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.646 02:21:40 version -- app/version.sh@14 -- # cut -f2 00:06:06.646 02:21:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.646 02:21:40 version -- app/version.sh@20 -- # suffix=-pre 00:06:06.646 02:21:40 version -- app/version.sh@22 -- # version=25.1 00:06:06.646 02:21:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:06.646 02:21:40 version -- app/version.sh@28 -- # version=25.1rc0 00:06:06.647 02:21:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:06.647 02:21:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:06.647 02:21:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:06.647 02:21:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:06.647 ************************************ 00:06:06.647 END TEST version 00:06:06.647 ************************************ 00:06:06.647 00:06:06.647 real 0m0.312s 00:06:06.647 user 0m0.193s 00:06:06.647 sys 0m0.176s 00:06:06.647 02:21:40 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.647 02:21:40 version -- common/autotest_common.sh@10 -- # set +x 00:06:06.906 02:21:40 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:06.906 02:21:40 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:06.906 02:21:40 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:06.906 02:21:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.906 02:21:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.906 02:21:40 -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.906 ************************************ 00:06:06.906 START TEST bdev_raid 00:06:06.906 ************************************ 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:06.906 * Looking for test storage... 00:06:06.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.906 02:21:40 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.906 --rc genhtml_branch_coverage=1 00:06:06.906 --rc genhtml_function_coverage=1 00:06:06.906 --rc genhtml_legend=1 00:06:06.906 --rc geninfo_all_blocks=1 00:06:06.906 --rc geninfo_unexecuted_blocks=1 00:06:06.906 00:06:06.906 ' 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.906 --rc genhtml_branch_coverage=1 00:06:06.906 --rc genhtml_function_coverage=1 00:06:06.906 --rc genhtml_legend=1 00:06:06.906 --rc geninfo_all_blocks=1 00:06:06.906 --rc geninfo_unexecuted_blocks=1 00:06:06.906 00:06:06.906 ' 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:06.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.906 --rc genhtml_branch_coverage=1 00:06:06.906 --rc genhtml_function_coverage=1 00:06:06.906 --rc genhtml_legend=1 00:06:06.906 --rc geninfo_all_blocks=1 00:06:06.906 --rc geninfo_unexecuted_blocks=1 00:06:06.906 00:06:06.906 ' 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.906 --rc genhtml_branch_coverage=1 00:06:06.906 --rc genhtml_function_coverage=1 00:06:06.906 --rc genhtml_legend=1 00:06:06.906 --rc geninfo_all_blocks=1 00:06:06.906 --rc geninfo_unexecuted_blocks=1 00:06:06.906 00:06:06.906 ' 00:06:06.906 02:21:40 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:06.906 02:21:40 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.906 02:21:40 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:06.906 02:21:40 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:06.906 02:21:40 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:06.906 02:21:40 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:06.906 02:21:40 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.906 02:21:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:07.166 ************************************ 00:06:07.166 START TEST raid1_resize_data_offset_test 00:06:07.166 ************************************ 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=59908 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59908' 00:06:07.166 Process raid pid: 59908 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59908 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59908 ']' 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.166 02:21:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:07.166 [2024-11-28 02:21:40.669270] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:07.166 [2024-11-28 02:21:40.669472] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.166 [2024-11-28 02:21:40.820314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.426 [2024-11-28 02:21:40.933111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.686 [2024-11-28 02:21:41.129339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:07.686 [2024-11-28 02:21:41.129474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:07.946 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.946 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:07.946 02:21:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:07.946 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.946 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:07.946 malloc0 00:06:07.946 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.946 02:21:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:07.946 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.946 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.206 malloc1 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.206 02:21:41 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.206 null0 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.206 [2024-11-28 02:21:41.682946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:08.206 [2024-11-28 02:21:41.684679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:08.206 [2024-11-28 02:21:41.684742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:08.206 [2024-11-28 02:21:41.684895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:08.206 [2024-11-28 02:21:41.684910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:08.206 [2024-11-28 02:21:41.685183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:08.206 [2024-11-28 02:21:41.685372] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:08.206 [2024-11-28 02:21:41.685394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:08.206 [2024-11-28 02:21:41.685548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.206 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.207 [2024-11-28 02:21:41.742797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:08.207 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.207 02:21:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:08.207 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.207 02:21:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.780 malloc2 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.780 [2024-11-28 02:21:42.276989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:08.780 [2024-11-28 02:21:42.293888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.780 [2024-11-28 02:21:42.295777] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59908 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59908 ']' 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59908 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59908 00:06:08.780 killing process with pid 59908 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59908' 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59908 00:06:08.780 [2024-11-28 02:21:42.388547] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:08.780 02:21:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59908 00:06:08.780 [2024-11-28 02:21:42.389237] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:08.780 [2024-11-28 02:21:42.389319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:08.780 [2024-11-28 02:21:42.389342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:08.780 [2024-11-28 02:21:42.422448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:08.780 [2024-11-28 02:21:42.422748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:08.780 [2024-11-28 02:21:42.422765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:10.692 [2024-11-28 02:21:44.123794] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:11.629 ************************************ 00:06:11.629 END TEST raid1_resize_data_offset_test 00:06:11.629 ************************************ 00:06:11.629 02:21:45 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:11.629 00:06:11.629 real 0m4.630s 00:06:11.629 user 0m4.480s 00:06:11.629 sys 0m0.578s 00:06:11.629 02:21:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.629 02:21:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.629 02:21:45 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:11.629 02:21:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:11.629 02:21:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.629 02:21:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:11.629 ************************************ 00:06:11.629 START TEST raid0_resize_superblock_test 00:06:11.629 ************************************ 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59997 00:06:11.629 Process raid pid: 59997 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59997' 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59997 00:06:11.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59997 ']' 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.629 02:21:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.888 [2024-11-28 02:21:45.370403] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:11.888 [2024-11-28 02:21:45.370608] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:11.888 [2024-11-28 02:21:45.543332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.149 [2024-11-28 02:21:45.651892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.409 [2024-11-28 02:21:45.849619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:12.409 [2024-11-28 02:21:45.849649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:12.668 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.669 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:12.669 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:06:12.669 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.669 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.240 malloc0 00:06:13.240 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.240 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:13.240 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.240 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.240 [2024-11-28 02:21:46.736561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:13.240 [2024-11-28 02:21:46.736624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:13.240 [2024-11-28 02:21:46.736645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:13.240 [2024-11-28 02:21:46.736655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:13.240 [2024-11-28 02:21:46.738997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:13.240 [2024-11-28 02:21:46.739036] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:13.240 pt0 00:06:13.240 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.241 5e80e2c0-e43d-4e91-8bb7-5798fde72cfd 00:06:13.241 02:21:46 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.241 055628ff-eb6c-4bc8-be66-a13e6dec59ab 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.241 b8f6b48d-664a-4dc3-8d1d-a981bafe6398 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.241 [2024-11-28 02:21:46.869325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 055628ff-eb6c-4bc8-be66-a13e6dec59ab is claimed 00:06:13.241 [2024-11-28 02:21:46.869407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b8f6b48d-664a-4dc3-8d1d-a981bafe6398 is claimed 00:06:13.241 [2024-11-28 02:21:46.869527] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:13.241 [2024-11-28 02:21:46.869542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:13.241 [2024-11-28 02:21:46.869808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:13.241 [2024-11-28 02:21:46.870016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:13.241 [2024-11-28 02:21:46.870028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:13.241 [2024-11-28 02:21:46.870171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.241 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.501 02:21:46 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.501 02:21:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.501 [2024-11-28 02:21:46.981360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:13.501 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.502 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:13.502 02:21:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.502 [2024-11-28 02:21:47.009279] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:13.502 [2024-11-28 02:21:47.009306] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '055628ff-eb6c-4bc8-be66-a13e6dec59ab' was resized: old size 131072, new size 204800 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.502 [2024-11-28 02:21:47.021214] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:13.502 [2024-11-28 02:21:47.021239] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b8f6b48d-664a-4dc3-8d1d-a981bafe6398' was resized: old size 131072, new size 204800 00:06:13.502 [2024-11-28 02:21:47.021269] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:13.502 02:21:47 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.502 [2024-11-28 02:21:47.133103] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:13.502 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.762 [2024-11-28 02:21:47.180802] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:13.763 [2024-11-28 02:21:47.180926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:13.763 [2024-11-28 02:21:47.180946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:13.763 [2024-11-28 02:21:47.180960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:13.763 [2024-11-28 02:21:47.181096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:13.763 [2024-11-28 02:21:47.181134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:13.763 [2024-11-28 02:21:47.181146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 [2024-11-28 02:21:47.192707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:13.763 [2024-11-28 02:21:47.192812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:13.763 [2024-11-28 02:21:47.192854] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:13.763 [2024-11-28 02:21:47.192884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:13.763 
[2024-11-28 02:21:47.195226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:13.763 [2024-11-28 02:21:47.195305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:13.763 [2024-11-28 02:21:47.197235] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 055628ff-eb6c-4bc8-be66-a13e6dec59ab 00:06:13.763 [2024-11-28 02:21:47.197355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 055628ff-eb6c-4bc8-be66-a13e6dec59ab is claimed 00:06:13.763 [2024-11-28 02:21:47.197506] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b8f6b48d-664a-4dc3-8d1d-a981bafe6398 00:06:13.763 [2024-11-28 02:21:47.197584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b8f6b48d-664a-4dc3-8d1d-a981bafe6398 is claimed 00:06:13.763 [2024-11-28 02:21:47.197828] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b8f6b48d-664a-4dc3-8d1d-a981bafe6398 (2) smaller than existing raid bdev Raid (3) 00:06:13.763 [2024-11-28 02:21:47.197901] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 055628ff-eb6c-4bc8-be66-a13e6dec59ab: File exists 00:06:13.763 [2024-11-28 02:21:47.198021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:13.763 [2024-11-28 02:21:47.198060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:13.763 pt0 00:06:13.763 [2024-11-28 02:21:47.198359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:13.763 [2024-11-28 02:21:47.198562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:13.763 [2024-11-28 02:21:47.198603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:13.763 [2024-11-28 02:21:47.198791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 [2024-11-28 02:21:47.221470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59997 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 59997 ']' 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59997 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59997 00:06:13.763 killing process with pid 59997 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59997' 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59997 00:06:13.763 [2024-11-28 02:21:47.286875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:13.763 [2024-11-28 02:21:47.286972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:13.763 [2024-11-28 02:21:47.287014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:13.763 [2024-11-28 02:21:47.287022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:13.763 02:21:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59997 00:06:15.144 [2024-11-28 02:21:48.641663] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:16.085 02:21:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:16.085 00:06:16.085 real 0m4.438s 00:06:16.085 user 0m4.627s 00:06:16.085 sys 0m0.556s 
00:06:16.085 02:21:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.085 02:21:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.085 ************************************ 00:06:16.085 END TEST raid0_resize_superblock_test 00:06:16.085 ************************************ 00:06:16.361 02:21:49 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:16.361 02:21:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.361 02:21:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.361 02:21:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:16.361 ************************************ 00:06:16.361 START TEST raid1_resize_superblock_test 00:06:16.361 ************************************ 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60090 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60090' 00:06:16.361 Process raid pid: 60090 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60090 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60090 ']' 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.361 02:21:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.361 [2024-11-28 02:21:49.873027] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:16.361 [2024-11-28 02:21:49.873260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.626 [2024-11-28 02:21:50.047080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.626 [2024-11-28 02:21:50.156833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.885 [2024-11-28 02:21:50.355640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:16.885 [2024-11-28 02:21:50.355755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:17.145 02:21:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.145 02:21:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:17.145 02:21:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:17.145 02:21:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.145 02:21:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:17.742 malloc0 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.742 [2024-11-28 02:21:51.241471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:17.742 [2024-11-28 02:21:51.241572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.742 [2024-11-28 02:21:51.241627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:17.742 [2024-11-28 02:21:51.241658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.742 [2024-11-28 02:21:51.243780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.742 [2024-11-28 02:21:51.243855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:17.742 pt0 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.742 fef1d979-2a8e-45bd-a851-2e65c50db723 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.742 67fe6193-7545-474c-9e4a-7b422cba0c12 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.742 6f07e5c6-4313-43ff-83d3-9ef4faf75b4b 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.742 [2024-11-28 02:21:51.374089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 67fe6193-7545-474c-9e4a-7b422cba0c12 is claimed 00:06:17.742 [2024-11-28 02:21:51.374172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6f07e5c6-4313-43ff-83d3-9ef4faf75b4b is claimed 00:06:17.742 [2024-11-28 02:21:51.374297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:17.742 [2024-11-28 02:21:51.374312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:17.742 [2024-11-28 02:21:51.374567] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:17.742 [2024-11-28 02:21:51.374737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:17.742 [2024-11-28 02:21:51.374748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:17.742 [2024-11-28 02:21:51.374902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:17.742 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.002 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:18.002 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:18.002 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:18.002 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.002 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.002 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.002 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:18.003 02:21:51 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.003 [2024-11-28 02:21:51.490160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.003 [2024-11-28 02:21:51.530004] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:18.003 [2024-11-28 02:21:51.530073] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '67fe6193-7545-474c-9e4a-7b422cba0c12' was resized: old size 131072, new size 204800 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.003 [2024-11-28 02:21:51.537908] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:18.003 [2024-11-28 02:21:51.537979] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6f07e5c6-4313-43ff-83d3-9ef4faf75b4b' was resized: old size 131072, new size 204800 00:06:18.003 [2024-11-28 02:21:51.538035] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.003 02:21:51 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.003 [2024-11-28 02:21:51.653800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:18.003 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.262 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:18.262 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:18.262 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:18.262 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:18.262 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.262 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.262 [2024-11-28 02:21:51.701549] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:06:18.262 [2024-11-28 02:21:51.701613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:18.262 [2024-11-28 02:21:51.701638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:18.262 [2024-11-28 02:21:51.701766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:18.262 [2024-11-28 02:21:51.701967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:18.262 [2024-11-28 02:21:51.702066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:18.262 [2024-11-28 02:21:51.702088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:18.262 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.262 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:18.262 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.262 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.262 [2024-11-28 02:21:51.713462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:18.262 [2024-11-28 02:21:51.713561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:18.262 [2024-11-28 02:21:51.713581] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:18.262 [2024-11-28 02:21:51.713594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:18.262 [2024-11-28 02:21:51.715711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:18.262 [2024-11-28 02:21:51.715751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:18.262 [2024-11-28 02:21:51.717361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 67fe6193-7545-474c-9e4a-7b422cba0c12 00:06:18.262 [2024-11-28 02:21:51.717438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 67fe6193-7545-474c-9e4a-7b422cba0c12 is claimed 00:06:18.263 [2024-11-28 02:21:51.717531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 6f07e5c6-4313-43ff-83d3-9ef4faf75b4b 00:06:18.263 [2024-11-28 02:21:51.717548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6f07e5c6-4313-43ff-83d3-9ef4faf75b4b is claimed 00:06:18.263 [2024-11-28 02:21:51.717652] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 6f07e5c6-4313-43ff-83d3-9ef4faf75b4b (2) smaller than existing raid bdev Raid (3) 00:06:18.263 [2024-11-28 02:21:51.717672] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 67fe6193-7545-474c-9e4a-7b422cba0c12: File exists 00:06:18.263 [2024-11-28 02:21:51.717708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:18.263 [2024-11-28 02:21:51.717735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:18.263 [2024-11-28 02:21:51.717979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:18.263 [2024-11-28 02:21:51.718131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:18.263 [2024-11-28 02:21:51.718140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:18.263 [2024-11-28 02:21:51.718311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:18.263 pt0 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.263 [2024-11-28 02:21:51.741884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60090 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60090 ']' 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60090 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60090 00:06:18.263 killing process with pid 60090 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60090' 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60090 00:06:18.263 [2024-11-28 02:21:51.822526] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:18.263 [2024-11-28 02:21:51.822589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:18.263 [2024-11-28 02:21:51.822636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:18.263 [2024-11-28 02:21:51.822644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:18.263 02:21:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60090 00:06:19.643 [2024-11-28 02:21:53.202179] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:21.025 02:21:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:21.025 00:06:21.025 real 0m4.512s 00:06:21.025 user 0m4.718s 00:06:21.025 sys 0m0.575s 00:06:21.025 ************************************ 00:06:21.025 END TEST raid1_resize_superblock_test 00:06:21.025 ************************************ 00:06:21.025 02:21:54 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.025 02:21:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.025 02:21:54 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:21.025 02:21:54 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:21.025 02:21:54 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:21.025 02:21:54 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:21.025 02:21:54 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:21.025 02:21:54 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:21.025 02:21:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:21.025 02:21:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.025 02:21:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:21.025 ************************************ 00:06:21.025 START TEST raid_function_test_raid0 00:06:21.025 ************************************ 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:21.025 Process raid pid: 60196 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60196 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60196' 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # 
waitforlisten 60196 00:06:21.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60196 ']' 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.025 02:21:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:21.025 [2024-11-28 02:21:54.474528] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:21.025 [2024-11-28 02:21:54.474727] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.025 [2024-11-28 02:21:54.648338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.290 [2024-11-28 02:21:54.762694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.290 [2024-11-28 02:21:54.956125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:21.290 [2024-11-28 02:21:54.956239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:21.858 02:21:55 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:21.858 Base_1 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:21.858 Base_2 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:21.858 [2024-11-28 02:21:55.384476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:21.858 [2024-11-28 02:21:55.386182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:21.858 [2024-11-28 02:21:55.386246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:21.858 [2024-11-28 02:21:55.386258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:21.858 [2024-11-28 02:21:55.386493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:21.858 [2024-11-28 02:21:55.386631] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007780 00:06:21.858 [2024-11-28 02:21:55.386640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:21.858 [2024-11-28 02:21:55.386796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:21.858 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:21.859 02:21:55 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:21.859 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:22.119 [2024-11-28 02:21:55.628133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:22.119 /dev/nbd0 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.119 1+0 records in 00:06:22.119 1+0 records out 00:06:22.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573019 s, 7.1 MB/s 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:22.119 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:22.379 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.379 { 00:06:22.379 "nbd_device": "/dev/nbd0", 00:06:22.379 "bdev_name": "raid" 00:06:22.379 } 00:06:22.379 ]' 00:06:22.379 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.379 { 00:06:22.379 "nbd_device": "/dev/nbd0", 00:06:22.379 "bdev_name": "raid" 00:06:22.379 } 00:06:22.379 ]' 00:06:22.379 02:21:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.379 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:22.379 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.379 02:21:56 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:22.380 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:22.640 4096+0 records in 00:06:22.640 4096+0 records out 00:06:22.640 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0343125 s, 61.1 MB/s 00:06:22.640 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:22.640 4096+0 records in 00:06:22.640 4096+0 records out 00:06:22.640 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.211964 s, 9.9 MB/s 00:06:22.640 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:22.640 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:22.900 128+0 records in 00:06:22.900 128+0 records out 00:06:22.900 65536 bytes (66 kB, 64 KiB) copied, 0.00122131 s, 53.7 MB/s 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- 
# cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:22.900 2035+0 records in 00:06:22.900 2035+0 records out 00:06:22.900 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0234346 s, 44.5 MB/s 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:22.900 456+0 records in 00:06:22.900 456+0 records out 00:06:22.900 233472 bytes (233 kB, 228 KiB) copied, 0.003942 s, 59.2 MB/s 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:22.900 02:21:56 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.900 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.161 [2024-11-28 02:21:56.655994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 
)) 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:23.161 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60196 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@954 -- # '[' -z 60196 ']' 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60196 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60196 00:06:23.422 killing process with pid 60196 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60196' 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60196 00:06:23.422 [2024-11-28 02:21:56.987135] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:23.422 [2024-11-28 02:21:56.987232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:23.422 [2024-11-28 02:21:56.987279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:23.422 [2024-11-28 02:21:56.987294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:23.422 02:21:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60196 00:06:23.682 [2024-11-28 02:21:57.186171] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:24.620 ************************************ 00:06:24.620 END TEST raid_function_test_raid0 00:06:24.620 ************************************ 00:06:24.620 02:21:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 
00:06:24.620 00:06:24.620 real 0m3.864s 00:06:24.620 user 0m4.510s 00:06:24.620 sys 0m0.996s 00:06:24.620 02:21:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.620 02:21:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.880 02:21:58 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:24.880 02:21:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.880 02:21:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.880 02:21:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:24.880 ************************************ 00:06:24.880 START TEST raid_function_test_concat 00:06:24.880 ************************************ 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60325 00:06:24.880 Process raid pid: 60325 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60325' 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60325 00:06:24.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60325 ']' 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.880 02:21:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:24.880 [2024-11-28 02:21:58.404256] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:24.880 [2024-11-28 02:21:58.404479] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.140 [2024-11-28 02:21:58.577109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.140 [2024-11-28 02:21:58.689094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.399 [2024-11-28 02:21:58.887875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:25.399 [2024-11-28 02:21:58.888009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:25.659 02:21:59 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:25.659 Base_1 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:25.659 Base_2 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:25.659 [2024-11-28 02:21:59.316215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:25.659 [2024-11-28 02:21:59.317980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:25.659 [2024-11-28 02:21:59.318050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:25.659 [2024-11-28 02:21:59.318062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:25.659 [2024-11-28 02:21:59.318308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:25.659 [2024-11-28 02:21:59.318457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:25.659 [2024-11-28 02:21:59.318465] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:25.659 [2024-11-28 02:21:59.318630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.659 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.920 
02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:25.920 [2024-11-28 02:21:59.555851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:25.920 /dev/nbd0 00:06:25.920 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.181 1+0 records in 00:06:26.181 1+0 records out 00:06:26.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310439 s, 13.2 MB/s 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.181 { 00:06:26.181 "nbd_device": "/dev/nbd0", 00:06:26.181 "bdev_name": "raid" 00:06:26.181 } 00:06:26.181 ]' 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.181 { 00:06:26.181 "nbd_device": "/dev/nbd0", 00:06:26.181 "bdev_name": "raid" 00:06:26.181 } 00:06:26.181 ]' 00:06:26.181 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.441 02:21:59 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 
00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:26.441 4096+0 records in 00:06:26.441 4096+0 records out 00:06:26.441 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.031944 s, 65.7 MB/s 00:06:26.441 02:21:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:26.701 4096+0 records in 00:06:26.701 4096+0 records out 00:06:26.701 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.185754 s, 11.3 MB/s 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:26.701 128+0 records in 00:06:26.701 128+0 records out 00:06:26.701 65536 bytes (66 kB, 64 KiB) copied, 0.00107718 s, 60.8 MB/s 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:26.701 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:26.701 2035+0 records in 00:06:26.702 2035+0 records out 00:06:26.702 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0130129 s, 80.1 MB/s 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:26.702 456+0 records in 00:06:26.702 456+0 records out 00:06:26.702 233472 bytes (233 kB, 228 KiB) copied, 0.00290149 s, 80.5 MB/s 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:26.702 02:22:00 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.702 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.962 [2024-11-28 02:22:00.459647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:26.962 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60325 00:06:27.222 02:22:00 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60325 ']' 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60325 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60325 00:06:27.222 killing process with pid 60325 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60325' 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60325 00:06:27.222 [2024-11-28 02:22:00.762932] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:27.222 [2024-11-28 02:22:00.763034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:27.222 [2024-11-28 02:22:00.763084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:27.222 [2024-11-28 02:22:00.763096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:27.222 02:22:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60325 00:06:27.483 [2024-11-28 02:22:00.960421] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:28.422 02:22:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:28.422 00:06:28.422 real 0m3.705s 00:06:28.422 user 0m4.270s 00:06:28.422 sys 0m0.944s 
00:06:28.422 02:22:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.422 02:22:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:28.422 ************************************ 00:06:28.422 END TEST raid_function_test_concat 00:06:28.422 ************************************ 00:06:28.422 02:22:02 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:28.422 02:22:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.422 02:22:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.422 02:22:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:28.422 ************************************ 00:06:28.422 START TEST raid0_resize_test 00:06:28.422 ************************************ 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60441 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc 
-i 0 -L bdev_raid 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60441' 00:06:28.422 Process raid pid: 60441 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60441 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60441 ']' 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.422 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.423 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.423 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.682 [2024-11-28 02:22:02.170334] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:28.682 [2024-11-28 02:22:02.170432] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.682 [2024-11-28 02:22:02.344609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.941 [2024-11-28 02:22:02.455243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.201 [2024-11-28 02:22:02.652689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.201 [2024-11-28 02:22:02.652724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.461 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.461 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:29.461 02:22:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:29.461 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.461 02:22:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.461 Base_1 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.461 Base_2 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.461 [2024-11-28 02:22:03.018265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:29.461 [2024-11-28 02:22:03.019972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:29.461 [2024-11-28 02:22:03.020018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:29.461 [2024-11-28 02:22:03.020029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:29.461 [2024-11-28 02:22:03.020262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:29.461 [2024-11-28 02:22:03.020370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:29.461 [2024-11-28 02:22:03.020377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:29.461 [2024-11-28 02:22:03.020502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.461 [2024-11-28 02:22:03.030225] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:29.461 [2024-11-28 02:22:03.030288] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:29.461 true 
00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.461 [2024-11-28 02:22:03.046352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.461 [2024-11-28 02:22:03.094097] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:29.461 [2024-11-28 02:22:03.094157] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:29.461 [2024-11-28 02:22:03.094215] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:29.461 true 
00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.461 [2024-11-28 02:22:03.110225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:29.461 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60441 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60441 ']' 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60441 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60441 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.720 02:22:03 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60441' 00:06:29.720 killing process with pid 60441 00:06:29.720 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60441 00:06:29.720 [2024-11-28 02:22:03.190417] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:29.720 [2024-11-28 02:22:03.190525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:29.720 [2024-11-28 02:22:03.190594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:29.721 [2024-11-28 02:22:03.190637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:29.721 02:22:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60441 00:06:29.721 [2024-11-28 02:22:03.206802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:30.659 02:22:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:30.659 00:06:30.659 real 0m2.191s 00:06:30.659 user 0m2.311s 00:06:30.659 sys 0m0.345s 00:06:30.659 02:22:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.659 02:22:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.659 ************************************ 00:06:30.659 END TEST raid0_resize_test 00:06:30.659 ************************************ 00:06:30.659 02:22:04 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:30.659 02:22:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.659 02:22:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.659 02:22:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.919 
************************************ 00:06:30.919 START TEST raid1_resize_test 00:06:30.919 ************************************ 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60501 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60501' 00:06:30.919 Process raid pid: 60501 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60501 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60501 ']' 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.919 02:22:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.919 [2024-11-28 02:22:04.430537] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:30.919 [2024-11-28 02:22:04.430724] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.177 [2024-11-28 02:22:04.604880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.177 [2024-11-28 02:22:04.713242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.437 [2024-11-28 02:22:04.913253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.437 [2024-11-28 02:22:04.913369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 Base_1 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:31.697 02:22:05 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 Base_2 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 [2024-11-28 02:22:05.275957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:31.697 [2024-11-28 02:22:05.277705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:31.697 [2024-11-28 02:22:05.277759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:31.697 [2024-11-28 02:22:05.277770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:31.697 [2024-11-28 02:22:05.278020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:31.697 [2024-11-28 02:22:05.278136] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:31.697 [2024-11-28 02:22:05.278144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:31.697 [2024-11-28 02:22:05.278275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:31.697 02:22:05 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 [2024-11-28 02:22:05.287912] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:31.697 [2024-11-28 02:22:05.287949] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:31.697 true 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:31.697 [2024-11-28 02:22:05.300056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:31.697 [2024-11-28 02:22:05.351786] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:31.697 [2024-11-28 02:22:05.351849] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:31.697 [2024-11-28 02:22:05.351904] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:31.697 true 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.697 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:31.698 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:31.698 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.698 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.698 [2024-11-28 02:22:05.367912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60501 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60501 ']' 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60501 00:06:31.957 
02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60501 00:06:31.957 killing process with pid 60501 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60501' 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60501 00:06:31.957 [2024-11-28 02:22:05.453672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:31.957 [2024-11-28 02:22:05.453752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:31.957 02:22:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60501 00:06:31.957 [2024-11-28 02:22:05.454218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:31.957 [2024-11-28 02:22:05.454290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:31.957 [2024-11-28 02:22:05.470117] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:32.894 ************************************ 00:06:32.894 END TEST raid1_resize_test 00:06:32.894 ************************************ 00:06:32.894 02:22:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:32.894 00:06:32.894 real 0m2.189s 00:06:32.894 user 0m2.316s 00:06:32.894 sys 0m0.329s 00:06:32.894 02:22:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.894 02:22:06 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.175 02:22:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:33.175 02:22:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:33.175 02:22:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:33.175 02:22:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:33.175 02:22:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.175 02:22:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:33.175 ************************************ 00:06:33.175 START TEST raid_state_function_test 00:06:33.175 ************************************ 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:33.175 02:22:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:33.175 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:33.176 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60565 00:06:33.176 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:33.176 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60565' 00:06:33.176 Process raid pid: 60565 00:06:33.176 02:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60565 00:06:33.176 02:22:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60565 ']' 00:06:33.176 02:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.176 02:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.176 02:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.176 02:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.176 02:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.176 [2024-11-28 02:22:06.688151] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:33.176 [2024-11-28 02:22:06.688634] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.433 [2024-11-28 02:22:06.859027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.433 [2024-11-28 02:22:06.968527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.691 [2024-11-28 02:22:07.164338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.691 [2024-11-28 02:22:07.164469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.949 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.949 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:33.949 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.950 [2024-11-28 02:22:07.525101] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:33.950 [2024-11-28 02:22:07.525201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:33.950 [2024-11-28 02:22:07.525247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:33.950 [2024-11-28 02:22:07.525272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.950 
02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.950 "name": "Existed_Raid", 00:06:33.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.950 "strip_size_kb": 64, 00:06:33.950 "state": "configuring", 00:06:33.950 "raid_level": "raid0", 00:06:33.950 "superblock": false, 00:06:33.950 "num_base_bdevs": 2, 00:06:33.950 "num_base_bdevs_discovered": 0, 00:06:33.950 "num_base_bdevs_operational": 2, 00:06:33.950 "base_bdevs_list": [ 00:06:33.950 { 00:06:33.950 "name": "BaseBdev1", 00:06:33.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.950 "is_configured": false, 00:06:33.950 "data_offset": 0, 00:06:33.950 "data_size": 0 00:06:33.950 }, 00:06:33.950 { 00:06:33.950 "name": "BaseBdev2", 00:06:33.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.950 "is_configured": false, 00:06:33.950 "data_offset": 0, 00:06:33.950 "data_size": 0 00:06:33.950 } 00:06:33.950 ] 00:06:33.950 }' 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.950 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:34.517 02:22:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.517 [2024-11-28 02:22:07.920353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:34.517 [2024-11-28 02:22:07.920389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.517 [2024-11-28 02:22:07.928339] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:34.517 [2024-11-28 02:22:07.928420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:34.517 [2024-11-28 02:22:07.928465] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:34.517 [2024-11-28 02:22:07.928492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.517 [2024-11-28 02:22:07.972225] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:34.517 BaseBdev1 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.517 02:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.517 [ 00:06:34.517 { 00:06:34.517 "name": "BaseBdev1", 00:06:34.517 "aliases": [ 00:06:34.517 "3c7c5215-50cd-4c6a-8754-be293d274b57" 00:06:34.517 ], 00:06:34.517 "product_name": "Malloc disk", 00:06:34.517 "block_size": 512, 00:06:34.517 "num_blocks": 65536, 00:06:34.517 "uuid": 
"3c7c5215-50cd-4c6a-8754-be293d274b57", 00:06:34.517 "assigned_rate_limits": { 00:06:34.517 "rw_ios_per_sec": 0, 00:06:34.517 "rw_mbytes_per_sec": 0, 00:06:34.517 "r_mbytes_per_sec": 0, 00:06:34.517 "w_mbytes_per_sec": 0 00:06:34.517 }, 00:06:34.517 "claimed": true, 00:06:34.517 "claim_type": "exclusive_write", 00:06:34.517 "zoned": false, 00:06:34.517 "supported_io_types": { 00:06:34.517 "read": true, 00:06:34.517 "write": true, 00:06:34.517 "unmap": true, 00:06:34.517 "flush": true, 00:06:34.517 "reset": true, 00:06:34.517 "nvme_admin": false, 00:06:34.517 "nvme_io": false, 00:06:34.517 "nvme_io_md": false, 00:06:34.517 "write_zeroes": true, 00:06:34.517 "zcopy": true, 00:06:34.517 "get_zone_info": false, 00:06:34.517 "zone_management": false, 00:06:34.517 "zone_append": false, 00:06:34.517 "compare": false, 00:06:34.517 "compare_and_write": false, 00:06:34.517 "abort": true, 00:06:34.517 "seek_hole": false, 00:06:34.517 "seek_data": false, 00:06:34.517 "copy": true, 00:06:34.517 "nvme_iov_md": false 00:06:34.517 }, 00:06:34.517 "memory_domains": [ 00:06:34.517 { 00:06:34.517 "dma_device_id": "system", 00:06:34.517 "dma_device_type": 1 00:06:34.517 }, 00:06:34.517 { 00:06:34.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.517 "dma_device_type": 2 00:06:34.517 } 00:06:34.517 ], 00:06:34.517 "driver_specific": {} 00:06:34.517 } 00:06:34.517 ] 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:34.517 02:22:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.517 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.517 "name": "Existed_Raid", 00:06:34.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.517 "strip_size_kb": 64, 00:06:34.517 "state": "configuring", 00:06:34.517 "raid_level": "raid0", 00:06:34.517 "superblock": false, 00:06:34.517 "num_base_bdevs": 2, 00:06:34.517 "num_base_bdevs_discovered": 1, 00:06:34.518 "num_base_bdevs_operational": 2, 00:06:34.518 "base_bdevs_list": [ 00:06:34.518 { 00:06:34.518 "name": "BaseBdev1", 00:06:34.518 "uuid": "3c7c5215-50cd-4c6a-8754-be293d274b57", 00:06:34.518 "is_configured": true, 00:06:34.518 "data_offset": 0, 
00:06:34.518 "data_size": 65536 00:06:34.518 }, 00:06:34.518 { 00:06:34.518 "name": "BaseBdev2", 00:06:34.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.518 "is_configured": false, 00:06:34.518 "data_offset": 0, 00:06:34.518 "data_size": 0 00:06:34.518 } 00:06:34.518 ] 00:06:34.518 }' 00:06:34.518 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.518 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.778 [2024-11-28 02:22:08.411507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:34.778 [2024-11-28 02:22:08.411548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.778 [2024-11-28 02:22:08.423518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:34.778 [2024-11-28 02:22:08.425293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:34.778 [2024-11-28 02:22:08.425379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.778 02:22:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.037 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:35.037 "name": "Existed_Raid", 00:06:35.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.037 "strip_size_kb": 64, 00:06:35.037 "state": "configuring", 00:06:35.037 "raid_level": "raid0", 00:06:35.037 "superblock": false, 00:06:35.037 "num_base_bdevs": 2, 00:06:35.037 "num_base_bdevs_discovered": 1, 00:06:35.037 "num_base_bdevs_operational": 2, 00:06:35.037 "base_bdevs_list": [ 00:06:35.037 { 00:06:35.037 "name": "BaseBdev1", 00:06:35.037 "uuid": "3c7c5215-50cd-4c6a-8754-be293d274b57", 00:06:35.037 "is_configured": true, 00:06:35.037 "data_offset": 0, 00:06:35.037 "data_size": 65536 00:06:35.037 }, 00:06:35.037 { 00:06:35.037 "name": "BaseBdev2", 00:06:35.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.037 "is_configured": false, 00:06:35.037 "data_offset": 0, 00:06:35.037 "data_size": 0 00:06:35.037 } 00:06:35.037 ] 00:06:35.037 }' 00:06:35.037 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:35.037 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.298 [2024-11-28 02:22:08.839492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:35.298 [2024-11-28 02:22:08.839620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:35.298 [2024-11-28 02:22:08.839647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:35.298 [2024-11-28 02:22:08.839960] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:35.298 [2024-11-28 02:22:08.840170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:35.298 [2024-11-28 02:22:08.840217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:35.298 [2024-11-28 02:22:08.840502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:35.298 BaseBdev2 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.298 02:22:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.298 [ 00:06:35.298 { 00:06:35.298 "name": "BaseBdev2", 00:06:35.298 "aliases": [ 00:06:35.298 "b6966424-845b-47d8-b5c4-c9d6cbc47e88" 00:06:35.298 ], 00:06:35.298 "product_name": "Malloc disk", 00:06:35.298 "block_size": 512, 00:06:35.298 "num_blocks": 65536, 00:06:35.298 "uuid": "b6966424-845b-47d8-b5c4-c9d6cbc47e88", 00:06:35.298 "assigned_rate_limits": { 00:06:35.298 "rw_ios_per_sec": 0, 00:06:35.298 "rw_mbytes_per_sec": 0, 00:06:35.298 "r_mbytes_per_sec": 0, 00:06:35.298 "w_mbytes_per_sec": 0 00:06:35.298 }, 00:06:35.298 "claimed": true, 00:06:35.298 "claim_type": "exclusive_write", 00:06:35.298 "zoned": false, 00:06:35.298 "supported_io_types": { 00:06:35.298 "read": true, 00:06:35.298 "write": true, 00:06:35.298 "unmap": true, 00:06:35.298 "flush": true, 00:06:35.298 "reset": true, 00:06:35.298 "nvme_admin": false, 00:06:35.298 "nvme_io": false, 00:06:35.298 "nvme_io_md": false, 00:06:35.298 "write_zeroes": true, 00:06:35.298 "zcopy": true, 00:06:35.298 "get_zone_info": false, 00:06:35.298 "zone_management": false, 00:06:35.298 "zone_append": false, 00:06:35.298 "compare": false, 00:06:35.298 "compare_and_write": false, 00:06:35.298 "abort": true, 00:06:35.298 "seek_hole": false, 00:06:35.298 "seek_data": false, 00:06:35.298 "copy": true, 00:06:35.298 "nvme_iov_md": false 00:06:35.298 }, 00:06:35.298 "memory_domains": [ 00:06:35.298 { 00:06:35.298 "dma_device_id": "system", 00:06:35.298 "dma_device_type": 1 00:06:35.298 }, 00:06:35.298 { 00:06:35.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.298 "dma_device_type": 2 00:06:35.298 } 00:06:35.298 ], 00:06:35.298 "driver_specific": {} 00:06:35.298 } 00:06:35.298 ] 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:35.298 02:22:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:06:35.298 "name": "Existed_Raid", 00:06:35.298 "uuid": "c00d456f-b815-41ab-a7b0-08ae6db47689", 00:06:35.298 "strip_size_kb": 64, 00:06:35.298 "state": "online", 00:06:35.298 "raid_level": "raid0", 00:06:35.298 "superblock": false, 00:06:35.298 "num_base_bdevs": 2, 00:06:35.298 "num_base_bdevs_discovered": 2, 00:06:35.298 "num_base_bdevs_operational": 2, 00:06:35.298 "base_bdevs_list": [ 00:06:35.298 { 00:06:35.298 "name": "BaseBdev1", 00:06:35.298 "uuid": "3c7c5215-50cd-4c6a-8754-be293d274b57", 00:06:35.298 "is_configured": true, 00:06:35.298 "data_offset": 0, 00:06:35.298 "data_size": 65536 00:06:35.298 }, 00:06:35.298 { 00:06:35.298 "name": "BaseBdev2", 00:06:35.298 "uuid": "b6966424-845b-47d8-b5c4-c9d6cbc47e88", 00:06:35.298 "is_configured": true, 00:06:35.298 "data_offset": 0, 00:06:35.298 "data_size": 65536 00:06:35.298 } 00:06:35.298 ] 00:06:35.298 }' 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:35.298 02:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.868 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:35.868 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:35.868 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:35.868 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:35.868 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:35.868 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:35.868 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:35.868 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:06:35.868 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.868 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.868 [2024-11-28 02:22:09.326960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:35.869 "name": "Existed_Raid", 00:06:35.869 "aliases": [ 00:06:35.869 "c00d456f-b815-41ab-a7b0-08ae6db47689" 00:06:35.869 ], 00:06:35.869 "product_name": "Raid Volume", 00:06:35.869 "block_size": 512, 00:06:35.869 "num_blocks": 131072, 00:06:35.869 "uuid": "c00d456f-b815-41ab-a7b0-08ae6db47689", 00:06:35.869 "assigned_rate_limits": { 00:06:35.869 "rw_ios_per_sec": 0, 00:06:35.869 "rw_mbytes_per_sec": 0, 00:06:35.869 "r_mbytes_per_sec": 0, 00:06:35.869 "w_mbytes_per_sec": 0 00:06:35.869 }, 00:06:35.869 "claimed": false, 00:06:35.869 "zoned": false, 00:06:35.869 "supported_io_types": { 00:06:35.869 "read": true, 00:06:35.869 "write": true, 00:06:35.869 "unmap": true, 00:06:35.869 "flush": true, 00:06:35.869 "reset": true, 00:06:35.869 "nvme_admin": false, 00:06:35.869 "nvme_io": false, 00:06:35.869 "nvme_io_md": false, 00:06:35.869 "write_zeroes": true, 00:06:35.869 "zcopy": false, 00:06:35.869 "get_zone_info": false, 00:06:35.869 "zone_management": false, 00:06:35.869 "zone_append": false, 00:06:35.869 "compare": false, 00:06:35.869 "compare_and_write": false, 00:06:35.869 "abort": false, 00:06:35.869 "seek_hole": false, 00:06:35.869 "seek_data": false, 00:06:35.869 "copy": false, 00:06:35.869 "nvme_iov_md": false 00:06:35.869 }, 00:06:35.869 "memory_domains": [ 00:06:35.869 { 00:06:35.869 "dma_device_id": "system", 00:06:35.869 "dma_device_type": 1 00:06:35.869 }, 00:06:35.869 { 00:06:35.869 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:06:35.869 "dma_device_type": 2 00:06:35.869 }, 00:06:35.869 { 00:06:35.869 "dma_device_id": "system", 00:06:35.869 "dma_device_type": 1 00:06:35.869 }, 00:06:35.869 { 00:06:35.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.869 "dma_device_type": 2 00:06:35.869 } 00:06:35.869 ], 00:06:35.869 "driver_specific": { 00:06:35.869 "raid": { 00:06:35.869 "uuid": "c00d456f-b815-41ab-a7b0-08ae6db47689", 00:06:35.869 "strip_size_kb": 64, 00:06:35.869 "state": "online", 00:06:35.869 "raid_level": "raid0", 00:06:35.869 "superblock": false, 00:06:35.869 "num_base_bdevs": 2, 00:06:35.869 "num_base_bdevs_discovered": 2, 00:06:35.869 "num_base_bdevs_operational": 2, 00:06:35.869 "base_bdevs_list": [ 00:06:35.869 { 00:06:35.869 "name": "BaseBdev1", 00:06:35.869 "uuid": "3c7c5215-50cd-4c6a-8754-be293d274b57", 00:06:35.869 "is_configured": true, 00:06:35.869 "data_offset": 0, 00:06:35.869 "data_size": 65536 00:06:35.869 }, 00:06:35.869 { 00:06:35.869 "name": "BaseBdev2", 00:06:35.869 "uuid": "b6966424-845b-47d8-b5c4-c9d6cbc47e88", 00:06:35.869 "is_configured": true, 00:06:35.869 "data_offset": 0, 00:06:35.869 "data_size": 65536 00:06:35.869 } 00:06:35.869 ] 00:06:35.869 } 00:06:35.869 } 00:06:35.869 }' 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:35.869 BaseBdev2' 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.869 02:22:09 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:06:35.869 [2024-11-28 02:22:09.502410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:35.869 [2024-11-28 02:22:09.502485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:35.869 [2024-11-28 02:22:09.502555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:36.129 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.129 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:36.129 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:36.129 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:36.129 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:36.130 02:22:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:36.130 "name": "Existed_Raid", 00:06:36.130 "uuid": "c00d456f-b815-41ab-a7b0-08ae6db47689", 00:06:36.130 "strip_size_kb": 64, 00:06:36.130 "state": "offline", 00:06:36.130 "raid_level": "raid0", 00:06:36.130 "superblock": false, 00:06:36.130 "num_base_bdevs": 2, 00:06:36.130 "num_base_bdevs_discovered": 1, 00:06:36.130 "num_base_bdevs_operational": 1, 00:06:36.130 "base_bdevs_list": [ 00:06:36.130 { 00:06:36.130 "name": null, 00:06:36.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.130 "is_configured": false, 00:06:36.130 "data_offset": 0, 00:06:36.130 "data_size": 65536 00:06:36.130 }, 00:06:36.130 { 00:06:36.130 "name": "BaseBdev2", 00:06:36.130 "uuid": "b6966424-845b-47d8-b5c4-c9d6cbc47e88", 00:06:36.130 "is_configured": true, 00:06:36.130 "data_offset": 0, 00:06:36.130 "data_size": 65536 00:06:36.130 } 00:06:36.130 ] 00:06:36.130 }' 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:36.130 02:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.390 02:22:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:36.390 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:36.390 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.390 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.390 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:36.390 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.390 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.650 [2024-11-28 02:22:10.083095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:36.650 [2024-11-28 02:22:10.083192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.650 02:22:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60565 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60565 ']' 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60565 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60565 00:06:36.650 killing process with pid 60565 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60565' 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60565 00:06:36.650 [2024-11-28 02:22:10.260147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:06:36.650 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60565 00:06:36.650 [2024-11-28 02:22:10.276598] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:38.032 ************************************ 00:06:38.032 END TEST raid_state_function_test 00:06:38.032 ************************************ 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:38.032 00:06:38.032 real 0m4.754s 00:06:38.032 user 0m6.859s 00:06:38.032 sys 0m0.717s 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.032 02:22:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:38.032 02:22:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:38.032 02:22:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.032 02:22:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:38.032 ************************************ 00:06:38.032 START TEST raid_state_function_test_sb 00:06:38.032 ************************************ 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:06:38.032 Process raid pid: 60807 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60807 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60807' 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60807 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60807 ']' 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.032 02:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.032 [2024-11-28 02:22:11.511815] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:38.032 [2024-11-28 02:22:11.511946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.032 [2024-11-28 02:22:11.684986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.292 [2024-11-28 02:22:11.794863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.552 [2024-11-28 02:22:11.988206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.552 [2024-11-28 02:22:11.988241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.812 [2024-11-28 02:22:12.334817] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:38.812 [2024-11-28 02:22:12.334875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:38.812 [2024-11-28 02:22:12.334885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:38.812 [2024-11-28 02:22:12.334895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.812 
02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.812 "name": "Existed_Raid", 00:06:38.812 "uuid": "5b9013ea-316c-497c-a4b3-b16674e50211", 00:06:38.812 "strip_size_kb": 
64, 00:06:38.812 "state": "configuring", 00:06:38.812 "raid_level": "raid0", 00:06:38.812 "superblock": true, 00:06:38.812 "num_base_bdevs": 2, 00:06:38.812 "num_base_bdevs_discovered": 0, 00:06:38.812 "num_base_bdevs_operational": 2, 00:06:38.812 "base_bdevs_list": [ 00:06:38.812 { 00:06:38.812 "name": "BaseBdev1", 00:06:38.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.812 "is_configured": false, 00:06:38.812 "data_offset": 0, 00:06:38.812 "data_size": 0 00:06:38.812 }, 00:06:38.812 { 00:06:38.812 "name": "BaseBdev2", 00:06:38.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.812 "is_configured": false, 00:06:38.812 "data_offset": 0, 00:06:38.812 "data_size": 0 00:06:38.812 } 00:06:38.812 ] 00:06:38.812 }' 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.812 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.073 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:39.073 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.073 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.073 [2024-11-28 02:22:12.738063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:39.073 [2024-11-28 02:22:12.738145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:39.073 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.073 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:39.073 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.073 02:22:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.073 [2024-11-28 02:22:12.750061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:39.073 [2024-11-28 02:22:12.750155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:39.073 [2024-11-28 02:22:12.750182] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:39.073 [2024-11-28 02:22:12.750207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.333 [2024-11-28 02:22:12.794171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:39.333 BaseBdev1 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.333 [ 00:06:39.333 { 00:06:39.333 "name": "BaseBdev1", 00:06:39.333 "aliases": [ 00:06:39.333 "ab41c5f9-8272-41aa-b9b5-200b7680cb68" 00:06:39.333 ], 00:06:39.333 "product_name": "Malloc disk", 00:06:39.333 "block_size": 512, 00:06:39.333 "num_blocks": 65536, 00:06:39.333 "uuid": "ab41c5f9-8272-41aa-b9b5-200b7680cb68", 00:06:39.333 "assigned_rate_limits": { 00:06:39.333 "rw_ios_per_sec": 0, 00:06:39.333 "rw_mbytes_per_sec": 0, 00:06:39.333 "r_mbytes_per_sec": 0, 00:06:39.333 "w_mbytes_per_sec": 0 00:06:39.333 }, 00:06:39.333 "claimed": true, 00:06:39.333 "claim_type": "exclusive_write", 00:06:39.333 "zoned": false, 00:06:39.333 "supported_io_types": { 00:06:39.333 "read": true, 00:06:39.333 "write": true, 00:06:39.333 "unmap": true, 00:06:39.333 "flush": true, 00:06:39.333 "reset": true, 00:06:39.333 "nvme_admin": false, 00:06:39.333 "nvme_io": false, 00:06:39.333 "nvme_io_md": false, 00:06:39.333 "write_zeroes": true, 00:06:39.333 "zcopy": true, 00:06:39.333 "get_zone_info": false, 00:06:39.333 "zone_management": false, 00:06:39.333 "zone_append": false, 00:06:39.333 "compare": false, 00:06:39.333 "compare_and_write": false, 00:06:39.333 
"abort": true, 00:06:39.333 "seek_hole": false, 00:06:39.333 "seek_data": false, 00:06:39.333 "copy": true, 00:06:39.333 "nvme_iov_md": false 00:06:39.333 }, 00:06:39.333 "memory_domains": [ 00:06:39.333 { 00:06:39.333 "dma_device_id": "system", 00:06:39.333 "dma_device_type": 1 00:06:39.333 }, 00:06:39.333 { 00:06:39.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.333 "dma_device_type": 2 00:06:39.333 } 00:06:39.333 ], 00:06:39.333 "driver_specific": {} 00:06:39.333 } 00:06:39.333 ] 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.333 "name": "Existed_Raid", 00:06:39.333 "uuid": "02953b09-f8e9-4d64-98ae-dd2e3d8a7393", 00:06:39.333 "strip_size_kb": 64, 00:06:39.333 "state": "configuring", 00:06:39.333 "raid_level": "raid0", 00:06:39.333 "superblock": true, 00:06:39.333 "num_base_bdevs": 2, 00:06:39.333 "num_base_bdevs_discovered": 1, 00:06:39.333 "num_base_bdevs_operational": 2, 00:06:39.333 "base_bdevs_list": [ 00:06:39.333 { 00:06:39.333 "name": "BaseBdev1", 00:06:39.333 "uuid": "ab41c5f9-8272-41aa-b9b5-200b7680cb68", 00:06:39.333 "is_configured": true, 00:06:39.333 "data_offset": 2048, 00:06:39.333 "data_size": 63488 00:06:39.333 }, 00:06:39.333 { 00:06:39.333 "name": "BaseBdev2", 00:06:39.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:39.333 "is_configured": false, 00:06:39.333 "data_offset": 0, 00:06:39.333 "data_size": 0 00:06:39.333 } 00:06:39.333 ] 00:06:39.333 }' 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.333 02:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:39.903 [2024-11-28 02:22:13.385208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:39.903 [2024-11-28 02:22:13.385340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.903 [2024-11-28 02:22:13.397227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:39.903 [2024-11-28 02:22:13.399029] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:39.903 [2024-11-28 02:22:13.399073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.903 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.903 "name": "Existed_Raid", 00:06:39.903 "uuid": "382cb1b7-595b-4c5d-8d1e-507a9e4890c3", 00:06:39.903 "strip_size_kb": 64, 00:06:39.903 "state": "configuring", 00:06:39.903 "raid_level": "raid0", 00:06:39.903 "superblock": true, 00:06:39.903 "num_base_bdevs": 2, 00:06:39.903 "num_base_bdevs_discovered": 1, 00:06:39.903 "num_base_bdevs_operational": 2, 00:06:39.903 "base_bdevs_list": [ 00:06:39.903 { 00:06:39.903 "name": "BaseBdev1", 00:06:39.903 "uuid": "ab41c5f9-8272-41aa-b9b5-200b7680cb68", 00:06:39.903 "is_configured": true, 00:06:39.903 "data_offset": 2048, 
00:06:39.903 "data_size": 63488 00:06:39.903 }, 00:06:39.903 { 00:06:39.903 "name": "BaseBdev2", 00:06:39.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:39.903 "is_configured": false, 00:06:39.904 "data_offset": 0, 00:06:39.904 "data_size": 0 00:06:39.904 } 00:06:39.904 ] 00:06:39.904 }' 00:06:39.904 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.904 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.163 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:40.163 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.163 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.423 [2024-11-28 02:22:13.864408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:40.423 [2024-11-28 02:22:13.864678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:40.423 [2024-11-28 02:22:13.864694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:40.423 [2024-11-28 02:22:13.864986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:40.423 BaseBdev2 00:06:40.423 [2024-11-28 02:22:13.865152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:40.423 [2024-11-28 02:22:13.865231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:40.423 [2024-11-28 02:22:13.865372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.423 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.423 [ 00:06:40.423 { 00:06:40.423 "name": "BaseBdev2", 00:06:40.423 "aliases": [ 00:06:40.423 "6047e66f-58a2-45ab-9b5c-3c351ae19d3d" 00:06:40.423 ], 00:06:40.423 "product_name": "Malloc disk", 00:06:40.423 "block_size": 512, 00:06:40.423 "num_blocks": 65536, 00:06:40.423 "uuid": "6047e66f-58a2-45ab-9b5c-3c351ae19d3d", 00:06:40.423 "assigned_rate_limits": { 00:06:40.423 "rw_ios_per_sec": 0, 00:06:40.423 "rw_mbytes_per_sec": 0, 00:06:40.423 "r_mbytes_per_sec": 0, 00:06:40.423 "w_mbytes_per_sec": 0 00:06:40.423 }, 00:06:40.423 "claimed": true, 00:06:40.423 "claim_type": 
"exclusive_write", 00:06:40.423 "zoned": false, 00:06:40.423 "supported_io_types": { 00:06:40.423 "read": true, 00:06:40.423 "write": true, 00:06:40.423 "unmap": true, 00:06:40.423 "flush": true, 00:06:40.423 "reset": true, 00:06:40.423 "nvme_admin": false, 00:06:40.423 "nvme_io": false, 00:06:40.423 "nvme_io_md": false, 00:06:40.424 "write_zeroes": true, 00:06:40.424 "zcopy": true, 00:06:40.424 "get_zone_info": false, 00:06:40.424 "zone_management": false, 00:06:40.424 "zone_append": false, 00:06:40.424 "compare": false, 00:06:40.424 "compare_and_write": false, 00:06:40.424 "abort": true, 00:06:40.424 "seek_hole": false, 00:06:40.424 "seek_data": false, 00:06:40.424 "copy": true, 00:06:40.424 "nvme_iov_md": false 00:06:40.424 }, 00:06:40.424 "memory_domains": [ 00:06:40.424 { 00:06:40.424 "dma_device_id": "system", 00:06:40.424 "dma_device_type": 1 00:06:40.424 }, 00:06:40.424 { 00:06:40.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.424 "dma_device_type": 2 00:06:40.424 } 00:06:40.424 ], 00:06:40.424 "driver_specific": {} 00:06:40.424 } 00:06:40.424 ] 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.424 "name": "Existed_Raid", 00:06:40.424 "uuid": "382cb1b7-595b-4c5d-8d1e-507a9e4890c3", 00:06:40.424 "strip_size_kb": 64, 00:06:40.424 "state": "online", 00:06:40.424 "raid_level": "raid0", 00:06:40.424 "superblock": true, 00:06:40.424 "num_base_bdevs": 2, 00:06:40.424 "num_base_bdevs_discovered": 2, 00:06:40.424 "num_base_bdevs_operational": 2, 00:06:40.424 "base_bdevs_list": [ 00:06:40.424 { 00:06:40.424 "name": "BaseBdev1", 00:06:40.424 "uuid": "ab41c5f9-8272-41aa-b9b5-200b7680cb68", 00:06:40.424 "is_configured": true, 00:06:40.424 "data_offset": 2048, 00:06:40.424 "data_size": 63488 
00:06:40.424 }, 00:06:40.424 { 00:06:40.424 "name": "BaseBdev2", 00:06:40.424 "uuid": "6047e66f-58a2-45ab-9b5c-3c351ae19d3d", 00:06:40.424 "is_configured": true, 00:06:40.424 "data_offset": 2048, 00:06:40.424 "data_size": 63488 00:06:40.424 } 00:06:40.424 ] 00:06:40.424 }' 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.424 02:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.689 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:40.689 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:40.689 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:40.689 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:40.689 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:40.689 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:40.689 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:40.690 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:40.690 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.690 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.690 [2024-11-28 02:22:14.288017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.690 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.690 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:40.690 "name": 
"Existed_Raid", 00:06:40.690 "aliases": [ 00:06:40.690 "382cb1b7-595b-4c5d-8d1e-507a9e4890c3" 00:06:40.690 ], 00:06:40.690 "product_name": "Raid Volume", 00:06:40.690 "block_size": 512, 00:06:40.690 "num_blocks": 126976, 00:06:40.690 "uuid": "382cb1b7-595b-4c5d-8d1e-507a9e4890c3", 00:06:40.690 "assigned_rate_limits": { 00:06:40.690 "rw_ios_per_sec": 0, 00:06:40.690 "rw_mbytes_per_sec": 0, 00:06:40.690 "r_mbytes_per_sec": 0, 00:06:40.690 "w_mbytes_per_sec": 0 00:06:40.690 }, 00:06:40.690 "claimed": false, 00:06:40.690 "zoned": false, 00:06:40.690 "supported_io_types": { 00:06:40.690 "read": true, 00:06:40.690 "write": true, 00:06:40.690 "unmap": true, 00:06:40.690 "flush": true, 00:06:40.690 "reset": true, 00:06:40.690 "nvme_admin": false, 00:06:40.690 "nvme_io": false, 00:06:40.690 "nvme_io_md": false, 00:06:40.690 "write_zeroes": true, 00:06:40.690 "zcopy": false, 00:06:40.690 "get_zone_info": false, 00:06:40.690 "zone_management": false, 00:06:40.690 "zone_append": false, 00:06:40.690 "compare": false, 00:06:40.690 "compare_and_write": false, 00:06:40.690 "abort": false, 00:06:40.690 "seek_hole": false, 00:06:40.690 "seek_data": false, 00:06:40.690 "copy": false, 00:06:40.690 "nvme_iov_md": false 00:06:40.690 }, 00:06:40.690 "memory_domains": [ 00:06:40.690 { 00:06:40.690 "dma_device_id": "system", 00:06:40.690 "dma_device_type": 1 00:06:40.690 }, 00:06:40.690 { 00:06:40.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.690 "dma_device_type": 2 00:06:40.690 }, 00:06:40.690 { 00:06:40.690 "dma_device_id": "system", 00:06:40.690 "dma_device_type": 1 00:06:40.690 }, 00:06:40.690 { 00:06:40.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.690 "dma_device_type": 2 00:06:40.690 } 00:06:40.690 ], 00:06:40.690 "driver_specific": { 00:06:40.690 "raid": { 00:06:40.690 "uuid": "382cb1b7-595b-4c5d-8d1e-507a9e4890c3", 00:06:40.690 "strip_size_kb": 64, 00:06:40.690 "state": "online", 00:06:40.690 "raid_level": "raid0", 00:06:40.690 "superblock": true, 00:06:40.690 
"num_base_bdevs": 2, 00:06:40.690 "num_base_bdevs_discovered": 2, 00:06:40.690 "num_base_bdevs_operational": 2, 00:06:40.690 "base_bdevs_list": [ 00:06:40.690 { 00:06:40.690 "name": "BaseBdev1", 00:06:40.690 "uuid": "ab41c5f9-8272-41aa-b9b5-200b7680cb68", 00:06:40.690 "is_configured": true, 00:06:40.690 "data_offset": 2048, 00:06:40.690 "data_size": 63488 00:06:40.690 }, 00:06:40.690 { 00:06:40.690 "name": "BaseBdev2", 00:06:40.690 "uuid": "6047e66f-58a2-45ab-9b5c-3c351ae19d3d", 00:06:40.690 "is_configured": true, 00:06:40.690 "data_offset": 2048, 00:06:40.690 "data_size": 63488 00:06:40.690 } 00:06:40.690 ] 00:06:40.690 } 00:06:40.690 } 00:06:40.690 }' 00:06:40.690 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:40.690 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:40.690 BaseBdev2' 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.965 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.965 [2024-11-28 02:22:14.491433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:40.966 [2024-11-28 02:22:14.491472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:40.966 [2024-11-28 02:22:14.491524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.966 02:22:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.966 "name": "Existed_Raid", 00:06:40.966 "uuid": "382cb1b7-595b-4c5d-8d1e-507a9e4890c3", 00:06:40.966 "strip_size_kb": 64, 00:06:40.966 "state": "offline", 00:06:40.966 "raid_level": "raid0", 00:06:40.966 "superblock": true, 00:06:40.966 "num_base_bdevs": 2, 00:06:40.966 "num_base_bdevs_discovered": 1, 00:06:40.966 "num_base_bdevs_operational": 1, 00:06:40.966 "base_bdevs_list": [ 00:06:40.966 { 00:06:40.966 "name": null, 00:06:40.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:40.966 "is_configured": false, 00:06:40.966 "data_offset": 0, 00:06:40.966 "data_size": 63488 00:06:40.966 }, 00:06:40.966 { 00:06:40.966 "name": "BaseBdev2", 00:06:40.966 "uuid": "6047e66f-58a2-45ab-9b5c-3c351ae19d3d", 00:06:40.966 "is_configured": true, 00:06:40.966 "data_offset": 2048, 00:06:40.966 "data_size": 63488 00:06:40.966 } 00:06:40.966 ] 00:06:40.966 }' 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.966 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.545 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:41.545 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:41.545 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.545 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.545 02:22:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.545 02:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:41.545 02:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.545 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:41.545 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:41.545 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:41.545 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.546 [2024-11-28 02:22:15.036058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:41.546 [2024-11-28 02:22:15.036121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60807 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60807 ']' 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60807 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.546 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60807 00:06:41.811 killing process with pid 60807 00:06:41.811 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.811 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.811 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60807' 00:06:41.811 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60807 00:06:41.811 [2024-11-28 02:22:15.226725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:41.811 02:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60807 00:06:41.811 [2024-11-28 02:22:15.243735] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:42.750 02:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:06:42.750 00:06:42.750 real 0m4.886s 00:06:42.750 user 0m7.044s 00:06:42.750 sys 0m0.801s 00:06:42.750 02:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.750 02:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.750 ************************************ 00:06:42.750 END TEST raid_state_function_test_sb 00:06:42.750 ************************************ 00:06:42.750 02:22:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:42.750 02:22:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:42.750 02:22:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.750 02:22:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:42.750 ************************************ 00:06:42.750 START TEST raid_superblock_test 00:06:42.750 ************************************ 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61059 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61059 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61059 ']' 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.750 02:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.009 [2024-11-28 02:22:16.464328] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:43.009 [2024-11-28 02:22:16.464450] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61059 ] 00:06:43.009 [2024-11-28 02:22:16.621500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.268 [2024-11-28 02:22:16.734773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.268 [2024-11-28 02:22:16.934277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.268 [2024-11-28 02:22:16.934330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:43.839 02:22:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.839 malloc1 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.839 [2024-11-28 02:22:17.335935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:43.839 [2024-11-28 02:22:17.335989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.839 [2024-11-28 02:22:17.336009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:43.839 [2024-11-28 02:22:17.336018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.839 [2024-11-28 02:22:17.338090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.839 [2024-11-28 02:22:17.338124] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:43.839 pt1 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:43.839 02:22:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.839 malloc2 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.839 [2024-11-28 02:22:17.388785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:43.839 [2024-11-28 02:22:17.388850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.839 [2024-11-28 02:22:17.388874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:43.839 
[2024-11-28 02:22:17.388883] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.839 [2024-11-28 02:22:17.390931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.839 [2024-11-28 02:22:17.390961] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:43.839 pt2 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.839 [2024-11-28 02:22:17.400819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:43.839 [2024-11-28 02:22:17.402526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:43.839 [2024-11-28 02:22:17.402680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:43.839 [2024-11-28 02:22:17.402701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:43.839 [2024-11-28 02:22:17.402939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:43.839 [2024-11-28 02:22:17.403086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:43.839 [2024-11-28 02:22:17.403101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:06:43.839 [2024-11-28 02:22:17.403240] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:43.839 "name": "raid_bdev1", 00:06:43.839 "uuid": 
"b3934bba-b5d3-4388-9fbe-d13fa9b102aa", 00:06:43.839 "strip_size_kb": 64, 00:06:43.839 "state": "online", 00:06:43.839 "raid_level": "raid0", 00:06:43.839 "superblock": true, 00:06:43.839 "num_base_bdevs": 2, 00:06:43.839 "num_base_bdevs_discovered": 2, 00:06:43.839 "num_base_bdevs_operational": 2, 00:06:43.839 "base_bdevs_list": [ 00:06:43.839 { 00:06:43.839 "name": "pt1", 00:06:43.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:43.839 "is_configured": true, 00:06:43.839 "data_offset": 2048, 00:06:43.839 "data_size": 63488 00:06:43.839 }, 00:06:43.839 { 00:06:43.839 "name": "pt2", 00:06:43.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:43.839 "is_configured": true, 00:06:43.839 "data_offset": 2048, 00:06:43.839 "data_size": 63488 00:06:43.839 } 00:06:43.839 ] 00:06:43.839 }' 00:06:43.839 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:43.840 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.410 02:22:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.410 [2024-11-28 02:22:17.876298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:44.410 "name": "raid_bdev1", 00:06:44.410 "aliases": [ 00:06:44.410 "b3934bba-b5d3-4388-9fbe-d13fa9b102aa" 00:06:44.410 ], 00:06:44.410 "product_name": "Raid Volume", 00:06:44.410 "block_size": 512, 00:06:44.410 "num_blocks": 126976, 00:06:44.410 "uuid": "b3934bba-b5d3-4388-9fbe-d13fa9b102aa", 00:06:44.410 "assigned_rate_limits": { 00:06:44.410 "rw_ios_per_sec": 0, 00:06:44.410 "rw_mbytes_per_sec": 0, 00:06:44.410 "r_mbytes_per_sec": 0, 00:06:44.410 "w_mbytes_per_sec": 0 00:06:44.410 }, 00:06:44.410 "claimed": false, 00:06:44.410 "zoned": false, 00:06:44.410 "supported_io_types": { 00:06:44.410 "read": true, 00:06:44.410 "write": true, 00:06:44.410 "unmap": true, 00:06:44.410 "flush": true, 00:06:44.410 "reset": true, 00:06:44.410 "nvme_admin": false, 00:06:44.410 "nvme_io": false, 00:06:44.410 "nvme_io_md": false, 00:06:44.410 "write_zeroes": true, 00:06:44.410 "zcopy": false, 00:06:44.410 "get_zone_info": false, 00:06:44.410 "zone_management": false, 00:06:44.410 "zone_append": false, 00:06:44.410 "compare": false, 00:06:44.410 "compare_and_write": false, 00:06:44.410 "abort": false, 00:06:44.410 "seek_hole": false, 00:06:44.410 "seek_data": false, 00:06:44.410 "copy": false, 00:06:44.410 "nvme_iov_md": false 00:06:44.410 }, 00:06:44.410 "memory_domains": [ 00:06:44.410 { 00:06:44.410 "dma_device_id": "system", 00:06:44.410 "dma_device_type": 1 00:06:44.410 }, 00:06:44.410 { 00:06:44.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.410 "dma_device_type": 2 00:06:44.410 }, 00:06:44.410 { 00:06:44.410 "dma_device_id": "system", 00:06:44.410 "dma_device_type": 
1 00:06:44.410 }, 00:06:44.410 { 00:06:44.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.410 "dma_device_type": 2 00:06:44.410 } 00:06:44.410 ], 00:06:44.410 "driver_specific": { 00:06:44.410 "raid": { 00:06:44.410 "uuid": "b3934bba-b5d3-4388-9fbe-d13fa9b102aa", 00:06:44.410 "strip_size_kb": 64, 00:06:44.410 "state": "online", 00:06:44.410 "raid_level": "raid0", 00:06:44.410 "superblock": true, 00:06:44.410 "num_base_bdevs": 2, 00:06:44.410 "num_base_bdevs_discovered": 2, 00:06:44.410 "num_base_bdevs_operational": 2, 00:06:44.410 "base_bdevs_list": [ 00:06:44.410 { 00:06:44.410 "name": "pt1", 00:06:44.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:44.410 "is_configured": true, 00:06:44.410 "data_offset": 2048, 00:06:44.410 "data_size": 63488 00:06:44.410 }, 00:06:44.410 { 00:06:44.410 "name": "pt2", 00:06:44.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:44.410 "is_configured": true, 00:06:44.410 "data_offset": 2048, 00:06:44.410 "data_size": 63488 00:06:44.410 } 00:06:44.410 ] 00:06:44.410 } 00:06:44.410 } 00:06:44.410 }' 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:44.410 pt2' 00:06:44.410 02:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.410 02:22:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.410 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:44.410 [2024-11-28 02:22:18.083851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b3934bba-b5d3-4388-9fbe-d13fa9b102aa 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b3934bba-b5d3-4388-9fbe-d13fa9b102aa ']' 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.671 [2024-11-28 02:22:18.131497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:44.671 [2024-11-28 02:22:18.131524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:44.671 [2024-11-28 02:22:18.131596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:44.671 [2024-11-28 02:22:18.131641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:44.671 [2024-11-28 02:22:18.131652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.671 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.671 [2024-11-28 02:22:18.251346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:44.671 [2024-11-28 02:22:18.253265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:44.671 [2024-11-28 02:22:18.253330] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:44.671 [2024-11-28 02:22:18.253370] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:44.671 [2024-11-28 02:22:18.253383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:44.671 [2024-11-28 02:22:18.253396] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:06:44.671 request: 00:06:44.671 { 00:06:44.671 "name": "raid_bdev1", 00:06:44.671 "raid_level": "raid0", 00:06:44.671 "base_bdevs": [ 00:06:44.671 "malloc1", 00:06:44.671 "malloc2" 00:06:44.671 ], 00:06:44.671 "strip_size_kb": 64, 00:06:44.672 "superblock": false, 00:06:44.672 "method": "bdev_raid_create", 00:06:44.672 "req_id": 1 00:06:44.672 } 00:06:44.672 Got JSON-RPC error response 00:06:44.672 response: 00:06:44.672 { 00:06:44.672 "code": -17, 00:06:44.672 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:44.672 } 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.672 [2024-11-28 02:22:18.307240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:44.672 [2024-11-28 02:22:18.307283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.672 [2024-11-28 02:22:18.307298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:44.672 [2024-11-28 02:22:18.307308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.672 [2024-11-28 02:22:18.309488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.672 [2024-11-28 02:22:18.309522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:44.672 [2024-11-28 02:22:18.309594] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:44.672 [2024-11-28 02:22:18.309648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:44.672 pt1 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.672 "name": "raid_bdev1", 00:06:44.672 "uuid": "b3934bba-b5d3-4388-9fbe-d13fa9b102aa", 00:06:44.672 "strip_size_kb": 64, 00:06:44.672 "state": "configuring", 00:06:44.672 "raid_level": "raid0", 00:06:44.672 "superblock": true, 00:06:44.672 "num_base_bdevs": 2, 00:06:44.672 "num_base_bdevs_discovered": 1, 00:06:44.672 "num_base_bdevs_operational": 2, 00:06:44.672 "base_bdevs_list": [ 00:06:44.672 { 00:06:44.672 "name": "pt1", 00:06:44.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:44.672 "is_configured": true, 00:06:44.672 "data_offset": 2048, 00:06:44.672 "data_size": 63488 00:06:44.672 }, 00:06:44.672 { 00:06:44.672 "name": null, 00:06:44.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:44.672 "is_configured": false, 00:06:44.672 "data_offset": 2048, 00:06:44.672 "data_size": 63488 00:06:44.672 } 00:06:44.672 ] 00:06:44.672 }' 00:06:44.672 02:22:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.672 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.241 [2024-11-28 02:22:18.698638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:45.241 [2024-11-28 02:22:18.698725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.241 [2024-11-28 02:22:18.698754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:45.241 [2024-11-28 02:22:18.698769] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.241 [2024-11-28 02:22:18.699391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.241 [2024-11-28 02:22:18.699434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:45.241 [2024-11-28 02:22:18.699545] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:45.241 [2024-11-28 02:22:18.699582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:45.241 [2024-11-28 02:22:18.699723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:45.241 [2024-11-28 02:22:18.699741] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:45.241 [2024-11-28 02:22:18.700057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:45.241 [2024-11-28 02:22:18.700238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:45.241 [2024-11-28 02:22:18.700265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:45.241 [2024-11-28 02:22:18.700462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.241 pt2 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.241 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.241 "name": "raid_bdev1", 00:06:45.241 "uuid": "b3934bba-b5d3-4388-9fbe-d13fa9b102aa", 00:06:45.241 "strip_size_kb": 64, 00:06:45.242 "state": "online", 00:06:45.242 "raid_level": "raid0", 00:06:45.242 "superblock": true, 00:06:45.242 "num_base_bdevs": 2, 00:06:45.242 "num_base_bdevs_discovered": 2, 00:06:45.242 "num_base_bdevs_operational": 2, 00:06:45.242 "base_bdevs_list": [ 00:06:45.242 { 00:06:45.242 "name": "pt1", 00:06:45.242 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:45.242 "is_configured": true, 00:06:45.242 "data_offset": 2048, 00:06:45.242 "data_size": 63488 00:06:45.242 }, 00:06:45.242 { 00:06:45.242 "name": "pt2", 00:06:45.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:45.242 "is_configured": true, 00:06:45.242 "data_offset": 2048, 00:06:45.242 "data_size": 63488 00:06:45.242 } 00:06:45.242 ] 00:06:45.242 }' 00:06:45.242 02:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.242 02:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:45.501 
02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.501 [2024-11-28 02:22:19.122131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.501 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:45.501 "name": "raid_bdev1", 00:06:45.502 "aliases": [ 00:06:45.502 "b3934bba-b5d3-4388-9fbe-d13fa9b102aa" 00:06:45.502 ], 00:06:45.502 "product_name": "Raid Volume", 00:06:45.502 "block_size": 512, 00:06:45.502 "num_blocks": 126976, 00:06:45.502 "uuid": "b3934bba-b5d3-4388-9fbe-d13fa9b102aa", 00:06:45.502 "assigned_rate_limits": { 00:06:45.502 "rw_ios_per_sec": 0, 00:06:45.502 "rw_mbytes_per_sec": 0, 00:06:45.502 "r_mbytes_per_sec": 0, 00:06:45.502 "w_mbytes_per_sec": 0 00:06:45.502 }, 00:06:45.502 "claimed": false, 00:06:45.502 "zoned": false, 00:06:45.502 "supported_io_types": { 00:06:45.502 "read": true, 00:06:45.502 "write": true, 00:06:45.502 "unmap": true, 00:06:45.502 "flush": true, 00:06:45.502 "reset": true, 00:06:45.502 "nvme_admin": false, 00:06:45.502 "nvme_io": false, 00:06:45.502 "nvme_io_md": false, 00:06:45.502 
"write_zeroes": true, 00:06:45.502 "zcopy": false, 00:06:45.502 "get_zone_info": false, 00:06:45.502 "zone_management": false, 00:06:45.502 "zone_append": false, 00:06:45.502 "compare": false, 00:06:45.502 "compare_and_write": false, 00:06:45.502 "abort": false, 00:06:45.502 "seek_hole": false, 00:06:45.502 "seek_data": false, 00:06:45.502 "copy": false, 00:06:45.502 "nvme_iov_md": false 00:06:45.502 }, 00:06:45.502 "memory_domains": [ 00:06:45.502 { 00:06:45.502 "dma_device_id": "system", 00:06:45.502 "dma_device_type": 1 00:06:45.502 }, 00:06:45.502 { 00:06:45.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.502 "dma_device_type": 2 00:06:45.502 }, 00:06:45.502 { 00:06:45.502 "dma_device_id": "system", 00:06:45.502 "dma_device_type": 1 00:06:45.502 }, 00:06:45.502 { 00:06:45.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.502 "dma_device_type": 2 00:06:45.502 } 00:06:45.502 ], 00:06:45.502 "driver_specific": { 00:06:45.502 "raid": { 00:06:45.502 "uuid": "b3934bba-b5d3-4388-9fbe-d13fa9b102aa", 00:06:45.502 "strip_size_kb": 64, 00:06:45.502 "state": "online", 00:06:45.502 "raid_level": "raid0", 00:06:45.502 "superblock": true, 00:06:45.502 "num_base_bdevs": 2, 00:06:45.502 "num_base_bdevs_discovered": 2, 00:06:45.502 "num_base_bdevs_operational": 2, 00:06:45.502 "base_bdevs_list": [ 00:06:45.502 { 00:06:45.502 "name": "pt1", 00:06:45.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:45.502 "is_configured": true, 00:06:45.502 "data_offset": 2048, 00:06:45.502 "data_size": 63488 00:06:45.502 }, 00:06:45.502 { 00:06:45.502 "name": "pt2", 00:06:45.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:45.502 "is_configured": true, 00:06:45.502 "data_offset": 2048, 00:06:45.502 "data_size": 63488 00:06:45.502 } 00:06:45.502 ] 00:06:45.502 } 00:06:45.502 } 00:06:45.502 }' 00:06:45.502 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:06:45.761 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:45.761 pt2' 00:06:45.761 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.762 02:22:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:45.762 [2024-11-28 02:22:19.337685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b3934bba-b5d3-4388-9fbe-d13fa9b102aa '!=' b3934bba-b5d3-4388-9fbe-d13fa9b102aa ']' 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61059 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61059 ']' 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61059 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61059 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.762 killing process with pid 61059 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61059' 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61059 00:06:45.762 [2024-11-28 02:22:19.419040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:45.762 02:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61059 00:06:45.762 [2024-11-28 02:22:19.419119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.762 [2024-11-28 02:22:19.419187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.762 [2024-11-28 02:22:19.419197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:46.021 [2024-11-28 02:22:19.610485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:47.400 02:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:47.400 00:06:47.400 real 0m4.311s 00:06:47.400 user 0m6.023s 00:06:47.400 sys 0m0.715s 00:06:47.400 02:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.400 02:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.400 ************************************ 00:06:47.400 END TEST raid_superblock_test 00:06:47.400 ************************************ 00:06:47.400 02:22:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:47.400 02:22:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:47.400 02:22:20 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:47.400 02:22:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.400 ************************************ 00:06:47.400 START TEST raid_read_error_test 00:06:47.400 ************************************ 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uA126d2SYY 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61265 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61265 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61265 ']' 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.400 02:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.400 [2024-11-28 02:22:20.857553] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:47.400 [2024-11-28 02:22:20.857657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61265 ] 00:06:47.400 [2024-11-28 02:22:21.028437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.660 [2024-11-28 02:22:21.134220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.660 [2024-11-28 02:22:21.322097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.660 [2024-11-28 02:22:21.322160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 BaseBdev1_malloc 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 true 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 [2024-11-28 02:22:21.720968] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:48.231 [2024-11-28 02:22:21.721034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.231 [2024-11-28 02:22:21.721052] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:48.231 [2024-11-28 02:22:21.721063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.231 [2024-11-28 02:22:21.723060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.231 [2024-11-28 02:22:21.723094] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:48.231 BaseBdev1 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
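Each base bdev the test builds above is a three-layer stack: a malloc bdev, an error-injection bdev wrapped around it (which takes the name `EE_<base>`), and a passthru bdev on top that exposes the final `BaseBdevN` name the raid will consume. The `rpc_cmd` calls in the xtrace correspond to this rpc.py sequence — shown here as a hedged, illustrative command fragment, not a runnable script: it assumes a live SPDK target listening on the default `/var/tmp/spdk.sock` and is mirrored directly from the commands logged above.

```shell
# Mirrors the rpc_cmd calls in the xtrace above (requires a running
# SPDK target; paths are illustrative).
# 32 MiB malloc bdev with 512-byte blocks:
scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1_malloc
# Error-injection wrapper; registers as EE_BaseBdev1_malloc:
scripts/rpc.py bdev_error_create BaseBdev1_malloc
# Passthru on top of the error bdev, exposing the name the raid uses:
scripts/rpc.py bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
```

The same three calls are then repeated for BaseBdev2 before `bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s` assembles the array, as the following log lines show.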
00:06:48.231 BaseBdev2_malloc 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 true 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 [2024-11-28 02:22:21.786857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:48.231 [2024-11-28 02:22:21.786907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.231 [2024-11-28 02:22:21.786932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:48.231 [2024-11-28 02:22:21.786943] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.231 [2024-11-28 02:22:21.788954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.231 [2024-11-28 02:22:21.788993] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:48.231 BaseBdev2 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:48.231 02:22:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 [2024-11-28 02:22:21.798895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:48.231 [2024-11-28 02:22:21.800735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:48.231 [2024-11-28 02:22:21.800949] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:48.231 [2024-11-28 02:22:21.800967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:48.231 [2024-11-28 02:22:21.801188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:48.231 [2024-11-28 02:22:21.801347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:48.231 [2024-11-28 02:22:21.801375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:48.231 [2024-11-28 02:22:21.801525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.231 "name": "raid_bdev1", 00:06:48.231 "uuid": "d7ce8422-35b0-4de1-8f3a-480335bb7bd9", 00:06:48.231 "strip_size_kb": 64, 00:06:48.231 "state": "online", 00:06:48.231 "raid_level": "raid0", 00:06:48.231 "superblock": true, 00:06:48.231 "num_base_bdevs": 2, 00:06:48.231 "num_base_bdevs_discovered": 2, 00:06:48.231 "num_base_bdevs_operational": 2, 00:06:48.231 "base_bdevs_list": [ 00:06:48.231 { 00:06:48.231 "name": "BaseBdev1", 00:06:48.231 "uuid": "8793ded9-050d-5828-9437-479025ffc650", 00:06:48.231 "is_configured": true, 00:06:48.231 "data_offset": 2048, 00:06:48.231 "data_size": 63488 00:06:48.231 }, 00:06:48.231 { 00:06:48.231 "name": "BaseBdev2", 00:06:48.231 "uuid": "8d1e9c9a-6d40-501d-9e60-187b540f3f21", 00:06:48.231 "is_configured": true, 00:06:48.231 "data_offset": 2048, 00:06:48.231 "data_size": 63488 00:06:48.231 } 00:06:48.231 ] 00:06:48.231 }' 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.231 02:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.800 02:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:48.800 02:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:48.800 [2024-11-28 02:22:22.323326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
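`verify_raid_bdev_state` (invoked both before and after the error injection) pulls the `raid_bdev1` entry out of `bdev_raid_get_bdevs all` with jq and compares its fields against the expected values. A minimal self-contained sketch of that comparison — using `sed` instead of jq purely so it runs without the RPC layer, with the JSON literal abbreviated from the output captured above:

```shell
# Abbreviated from the bdev_raid_get_bdevs output logged above.
raid_bdev_info='{ "name": "raid_bdev1", "state": "online", "raid_level": "raid0", "num_base_bdevs_discovered": 2 }'

# Extract the fields the verifier cares about (the real script does
# this with jq -r '.[] | select(.name == "raid_bdev1")').
state=$(printf '%s' "$raid_bdev_info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
level=$(printf '%s' "$raid_bdev_info" | sed -n 's/.*"raid_level": "\([^"]*\)".*/\1/p')

# Compare against the expected state, as verify_raid_bdev_state does.
[ "$state" = "online" ] && [ "$level" = "raid0" ] && echo "raid_bdev1 verified"
```

Note the check is run a second time after the read-failure injection below: for raid0 (no redundancy) the array is still expected to report all base bdevs, so `expected_num_base_bdevs` stays at 2.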
00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.739 "name": "raid_bdev1", 00:06:49.739 "uuid": "d7ce8422-35b0-4de1-8f3a-480335bb7bd9", 00:06:49.739 "strip_size_kb": 64, 00:06:49.739 "state": "online", 00:06:49.739 "raid_level": "raid0", 00:06:49.739 "superblock": true, 00:06:49.739 "num_base_bdevs": 2, 00:06:49.739 "num_base_bdevs_discovered": 2, 00:06:49.739 "num_base_bdevs_operational": 2, 00:06:49.739 "base_bdevs_list": [ 00:06:49.739 { 00:06:49.739 "name": "BaseBdev1", 00:06:49.739 "uuid": "8793ded9-050d-5828-9437-479025ffc650", 00:06:49.739 "is_configured": true, 00:06:49.739 "data_offset": 2048, 00:06:49.739 "data_size": 63488 00:06:49.739 }, 00:06:49.739 { 00:06:49.739 "name": "BaseBdev2", 00:06:49.739 "uuid": "8d1e9c9a-6d40-501d-9e60-187b540f3f21", 00:06:49.739 "is_configured": true, 00:06:49.739 "data_offset": 2048, 00:06:49.739 "data_size": 63488 00:06:49.739 } 00:06:49.739 ] 00:06:49.739 }' 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.739 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.309 [2024-11-28 02:22:23.685440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:50.309 [2024-11-28 02:22:23.685543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:50.309 [2024-11-28 02:22:23.688228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.309 [2024-11-28 02:22:23.688328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.309 [2024-11-28 02:22:23.688381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:50.309 [2024-11-28 02:22:23.688426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:50.309 { 00:06:50.309 "results": [ 00:06:50.309 { 00:06:50.309 "job": "raid_bdev1", 00:06:50.309 "core_mask": "0x1", 00:06:50.309 "workload": "randrw", 00:06:50.309 "percentage": 50, 00:06:50.309 "status": "finished", 00:06:50.309 "queue_depth": 1, 00:06:50.309 "io_size": 131072, 00:06:50.309 "runtime": 1.362992, 00:06:50.309 "iops": 15942.866869358, 00:06:50.309 "mibps": 1992.85835866975, 00:06:50.309 "io_failed": 1, 00:06:50.309 "io_timeout": 0, 00:06:50.309 "avg_latency_us": 86.77083007210636, 00:06:50.309 "min_latency_us": 24.929257641921396, 00:06:50.309 "max_latency_us": 1824.419213973799 00:06:50.309 } 00:06:50.309 ], 00:06:50.309 "core_count": 1 00:06:50.309 } 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- 
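The failure-rate check that follows greps `fail_per_s` for `raid_bdev1` out of the bdevperf log and asserts it is nonzero (`0.73 != 0.00`). That 0.73 is consistent with the results JSON above: one failed I/O over ~1.363 s of runtime. A quick sketch of the arithmetic — assuming `fail_per_s` is `io_failed / runtime`, which matches the captured numbers, and likewise that `mibps` is `iops * io_size / 1 MiB`:

```shell
# Values from the "results" JSON printed above.
io_failed=1
runtime=1.362992
iops=15942.866869358

# 1 / 1.362992 ~= 0.73 failed I/Os per second, the value the test
# later compares against "0.00".
fail_per_s=$(awk -v f="$io_failed" -v r="$runtime" 'BEGIN { printf "%.2f", f / r }')

# io_size is 131072 (128k, per the bdevperf -o flag), so MiB/s is
# simply iops / 8 here; matches the logged 1992.85835866975.
mibps=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i / 8 }')

echo "fail_per_s=$fail_per_s mibps=$mibps"
```

The single failed I/O is the one injected via `bdev_error_inject_error EE_BaseBdev1_malloc read failure` earlier; everything else completed normally over the one-second `perform_tests` window.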
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61265 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61265 ']' 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61265 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61265 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.309 killing process with pid 61265 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61265' 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61265 00:06:50.309 [2024-11-28 02:22:23.734533] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.309 02:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61265 00:06:50.309 [2024-11-28 02:22:23.861681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.690 02:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:51.690 02:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:51.690 02:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uA126d2SYY 00:06:51.690 02:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:06:51.690 02:22:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:51.690 ************************************ 00:06:51.690 END TEST raid_read_error_test 00:06:51.690 ************************************ 00:06:51.690 02:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:51.690 02:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:51.690 02:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:06:51.690 00:06:51.690 real 0m4.241s 00:06:51.690 user 0m5.042s 00:06:51.690 sys 0m0.520s 00:06:51.690 02:22:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.690 02:22:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.690 02:22:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:51.690 02:22:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:51.690 02:22:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.690 02:22:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.690 ************************************ 00:06:51.690 START TEST raid_write_error_test 00:06:51.690 ************************************ 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:51.690 02:22:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Qx1J2AQYo8 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61411 00:06:51.690 02:22:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61411 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61411 ']' 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.690 02:22:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.690 [2024-11-28 02:22:25.165540] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:51.690 [2024-11-28 02:22:25.165656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61411 ] 00:06:51.690 [2024-11-28 02:22:25.330232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.948 [2024-11-28 02:22:25.439415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.211 [2024-11-28 02:22:25.626522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.211 [2024-11-28 02:22:25.626580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.472 BaseBdev1_malloc 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.472 true 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.472 [2024-11-28 02:22:26.093470] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:52.472 [2024-11-28 02:22:26.093524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.472 [2024-11-28 02:22:26.093543] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:52.472 [2024-11-28 02:22:26.093553] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.472 [2024-11-28 02:22:26.095557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.472 [2024-11-28 02:22:26.095600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:52.472 BaseBdev1 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:52.472 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:52.473 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.473 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.473 BaseBdev2_malloc 00:06:52.473 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.473 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:52.473 02:22:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.473 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.732 true 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.732 [2024-11-28 02:22:26.158967] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:52.732 [2024-11-28 02:22:26.159019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.732 [2024-11-28 02:22:26.159049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:52.732 [2024-11-28 02:22:26.159058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.732 [2024-11-28 02:22:26.161093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.732 [2024-11-28 02:22:26.161133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:52.732 BaseBdev2 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.732 [2024-11-28 02:22:26.171003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:52.732 [2024-11-28 02:22:26.172757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:52.732 [2024-11-28 02:22:26.172944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:52.732 [2024-11-28 02:22:26.172961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:52.732 [2024-11-28 02:22:26.173180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:52.732 [2024-11-28 02:22:26.173345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:52.732 [2024-11-28 02:22:26.173358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:52.732 [2024-11-28 02:22:26.173503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.732 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.732 "name": "raid_bdev1", 00:06:52.732 "uuid": "c56f6c5b-de8c-4b47-9592-afbdaef0b2e7", 00:06:52.732 "strip_size_kb": 64, 00:06:52.732 "state": "online", 00:06:52.732 "raid_level": "raid0", 00:06:52.732 "superblock": true, 00:06:52.732 "num_base_bdevs": 2, 00:06:52.732 "num_base_bdevs_discovered": 2, 00:06:52.732 "num_base_bdevs_operational": 2, 00:06:52.732 "base_bdevs_list": [ 00:06:52.732 { 00:06:52.732 "name": "BaseBdev1", 00:06:52.732 "uuid": "a24dce4a-d76c-59a9-9d7b-a08785c68453", 00:06:52.732 "is_configured": true, 00:06:52.732 "data_offset": 2048, 00:06:52.732 "data_size": 63488 00:06:52.732 }, 00:06:52.732 { 00:06:52.732 "name": "BaseBdev2", 00:06:52.732 "uuid": "e3700625-5d22-53ac-82b7-3d743f77b12b", 00:06:52.732 "is_configured": true, 00:06:52.733 "data_offset": 2048, 00:06:52.733 "data_size": 63488 00:06:52.733 } 00:06:52.733 ] 00:06:52.733 }' 00:06:52.733 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.733 02:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.992 02:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:52.992 02:22:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:53.253 [2024-11-28 02:22:26.723391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.193 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.194 02:22:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.194 "name": "raid_bdev1", 00:06:54.194 "uuid": "c56f6c5b-de8c-4b47-9592-afbdaef0b2e7", 00:06:54.194 "strip_size_kb": 64, 00:06:54.194 "state": "online", 00:06:54.194 "raid_level": "raid0", 00:06:54.194 "superblock": true, 00:06:54.194 "num_base_bdevs": 2, 00:06:54.194 "num_base_bdevs_discovered": 2, 00:06:54.194 "num_base_bdevs_operational": 2, 00:06:54.194 "base_bdevs_list": [ 00:06:54.194 { 00:06:54.194 "name": "BaseBdev1", 00:06:54.194 "uuid": "a24dce4a-d76c-59a9-9d7b-a08785c68453", 00:06:54.194 "is_configured": true, 00:06:54.194 "data_offset": 2048, 00:06:54.194 "data_size": 63488 00:06:54.194 }, 00:06:54.194 { 00:06:54.194 "name": "BaseBdev2", 00:06:54.194 "uuid": "e3700625-5d22-53ac-82b7-3d743f77b12b", 00:06:54.194 "is_configured": true, 00:06:54.194 "data_offset": 2048, 00:06:54.194 "data_size": 63488 00:06:54.194 } 00:06:54.194 ] 00:06:54.194 }' 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.194 02:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.454 02:22:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:06:54.454 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.454 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.454 [2024-11-28 02:22:28.085186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:54.454 [2024-11-28 02:22:28.085283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:54.454 [2024-11-28 02:22:28.087997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.454 [2024-11-28 02:22:28.088078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.454 [2024-11-28 02:22:28.088129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.454 [2024-11-28 02:22:28.088213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:54.454 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.454 { 00:06:54.454 "results": [ 00:06:54.454 { 00:06:54.454 "job": "raid_bdev1", 00:06:54.454 "core_mask": "0x1", 00:06:54.454 "workload": "randrw", 00:06:54.454 "percentage": 50, 00:06:54.454 "status": "finished", 00:06:54.454 "queue_depth": 1, 00:06:54.454 "io_size": 131072, 00:06:54.454 "runtime": 1.362792, 00:06:54.454 "iops": 16632.03188747806, 00:06:54.454 "mibps": 2079.0039859347576, 00:06:54.454 "io_failed": 1, 00:06:54.454 "io_timeout": 0, 00:06:54.454 "avg_latency_us": 83.10566714630257, 00:06:54.454 "min_latency_us": 24.817467248908297, 00:06:54.454 "max_latency_us": 1359.3711790393013 00:06:54.454 } 00:06:54.454 ], 00:06:54.454 "core_count": 1 00:06:54.454 } 00:06:54.454 02:22:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61411 00:06:54.454 02:22:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61411 ']' 00:06:54.454 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61411 00:06:54.454 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:06:54.454 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.454 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61411 00:06:54.715 killing process with pid 61411 00:06:54.715 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.715 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.715 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61411' 00:06:54.715 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61411 00:06:54.715 [2024-11-28 02:22:28.132742] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.715 02:22:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61411 00:06:54.715 [2024-11-28 02:22:28.260266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:56.097 02:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:56.097 02:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:56.097 02:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Qx1J2AQYo8 00:06:56.097 02:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:06:56.097 02:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:56.097 02:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:56.097 02:22:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:06:56.097 02:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:06:56.097 00:06:56.097 real 0m4.331s 00:06:56.097 user 0m5.250s 00:06:56.097 sys 0m0.525s 00:06:56.097 02:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.097 ************************************ 00:06:56.097 END TEST raid_write_error_test 00:06:56.097 ************************************ 00:06:56.097 02:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.097 02:22:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:56.097 02:22:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:56.097 02:22:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:56.098 02:22:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.098 02:22:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.098 ************************************ 00:06:56.098 START TEST raid_state_function_test 00:06:56.098 ************************************ 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:56.098 Process raid pid: 61549 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61549 
00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61549' 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61549 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61549 ']' 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.098 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.098 [2024-11-28 02:22:29.555504] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:56.098 [2024-11-28 02:22:29.555698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.098 [2024-11-28 02:22:29.730402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.358 [2024-11-28 02:22:29.837121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.358 [2024-11-28 02:22:30.031785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.358 [2024-11-28 02:22:30.031869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.927 [2024-11-28 02:22:30.410865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:56.927 [2024-11-28 02:22:30.411002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:56.927 [2024-11-28 02:22:30.411035] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.927 [2024-11-28 02:22:30.411059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.927 02:22:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.927 "name": "Existed_Raid", 00:06:56.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.927 "strip_size_kb": 64, 00:06:56.927 "state": "configuring", 00:06:56.927 
"raid_level": "concat", 00:06:56.927 "superblock": false, 00:06:56.927 "num_base_bdevs": 2, 00:06:56.927 "num_base_bdevs_discovered": 0, 00:06:56.927 "num_base_bdevs_operational": 2, 00:06:56.927 "base_bdevs_list": [ 00:06:56.927 { 00:06:56.927 "name": "BaseBdev1", 00:06:56.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.927 "is_configured": false, 00:06:56.927 "data_offset": 0, 00:06:56.927 "data_size": 0 00:06:56.927 }, 00:06:56.927 { 00:06:56.927 "name": "BaseBdev2", 00:06:56.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.927 "is_configured": false, 00:06:56.927 "data_offset": 0, 00:06:56.927 "data_size": 0 00:06:56.927 } 00:06:56.927 ] 00:06:56.927 }' 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.927 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.187 [2024-11-28 02:22:30.846056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:57.187 [2024-11-28 02:22:30.846138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:57.187 [2024-11-28 02:22:30.858051] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:57.187 [2024-11-28 02:22:30.858131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:57.187 [2024-11-28 02:22:30.858158] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:57.187 [2024-11-28 02:22:30.858183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.187 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.447 [2024-11-28 02:22:30.904130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:57.447 BaseBdev1 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.447 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.447 [ 00:06:57.447 { 00:06:57.447 "name": "BaseBdev1", 00:06:57.447 "aliases": [ 00:06:57.447 "b33de818-10e2-4ad9-bcd2-14830233c1a6" 00:06:57.447 ], 00:06:57.447 "product_name": "Malloc disk", 00:06:57.447 "block_size": 512, 00:06:57.448 "num_blocks": 65536, 00:06:57.448 "uuid": "b33de818-10e2-4ad9-bcd2-14830233c1a6", 00:06:57.448 "assigned_rate_limits": { 00:06:57.448 "rw_ios_per_sec": 0, 00:06:57.448 "rw_mbytes_per_sec": 0, 00:06:57.448 "r_mbytes_per_sec": 0, 00:06:57.448 "w_mbytes_per_sec": 0 00:06:57.448 }, 00:06:57.448 "claimed": true, 00:06:57.448 "claim_type": "exclusive_write", 00:06:57.448 "zoned": false, 00:06:57.448 "supported_io_types": { 00:06:57.448 "read": true, 00:06:57.448 "write": true, 00:06:57.448 "unmap": true, 00:06:57.448 "flush": true, 00:06:57.448 "reset": true, 00:06:57.448 "nvme_admin": false, 00:06:57.448 "nvme_io": false, 00:06:57.448 "nvme_io_md": false, 00:06:57.448 "write_zeroes": true, 00:06:57.448 "zcopy": true, 00:06:57.448 "get_zone_info": false, 00:06:57.448 "zone_management": false, 00:06:57.448 "zone_append": false, 00:06:57.448 "compare": false, 00:06:57.448 "compare_and_write": false, 00:06:57.448 "abort": true, 00:06:57.448 "seek_hole": false, 00:06:57.448 "seek_data": false, 00:06:57.448 "copy": true, 00:06:57.448 "nvme_iov_md": 
false 00:06:57.448 }, 00:06:57.448 "memory_domains": [ 00:06:57.448 { 00:06:57.448 "dma_device_id": "system", 00:06:57.448 "dma_device_type": 1 00:06:57.448 }, 00:06:57.448 { 00:06:57.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.448 "dma_device_type": 2 00:06:57.448 } 00:06:57.448 ], 00:06:57.448 "driver_specific": {} 00:06:57.448 } 00:06:57.448 ] 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.448 
02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.448 "name": "Existed_Raid", 00:06:57.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.448 "strip_size_kb": 64, 00:06:57.448 "state": "configuring", 00:06:57.448 "raid_level": "concat", 00:06:57.448 "superblock": false, 00:06:57.448 "num_base_bdevs": 2, 00:06:57.448 "num_base_bdevs_discovered": 1, 00:06:57.448 "num_base_bdevs_operational": 2, 00:06:57.448 "base_bdevs_list": [ 00:06:57.448 { 00:06:57.448 "name": "BaseBdev1", 00:06:57.448 "uuid": "b33de818-10e2-4ad9-bcd2-14830233c1a6", 00:06:57.448 "is_configured": true, 00:06:57.448 "data_offset": 0, 00:06:57.448 "data_size": 65536 00:06:57.448 }, 00:06:57.448 { 00:06:57.448 "name": "BaseBdev2", 00:06:57.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.448 "is_configured": false, 00:06:57.448 "data_offset": 0, 00:06:57.448 "data_size": 0 00:06:57.448 } 00:06:57.448 ] 00:06:57.448 }' 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.448 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.709 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:57.709 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.709 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.709 [2024-11-28 02:22:31.371409] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:57.709 [2024-11-28 02:22:31.371508] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:57.709 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.709 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:57.709 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.709 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.709 [2024-11-28 02:22:31.383423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:57.709 [2024-11-28 02:22:31.385178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:57.709 [2024-11-28 02:22:31.385270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.968 "name": "Existed_Raid", 00:06:57.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.968 "strip_size_kb": 64, 00:06:57.968 "state": "configuring", 00:06:57.968 "raid_level": "concat", 00:06:57.968 "superblock": false, 00:06:57.968 "num_base_bdevs": 2, 00:06:57.968 "num_base_bdevs_discovered": 1, 00:06:57.968 "num_base_bdevs_operational": 2, 00:06:57.968 "base_bdevs_list": [ 00:06:57.968 { 00:06:57.968 "name": "BaseBdev1", 00:06:57.968 "uuid": "b33de818-10e2-4ad9-bcd2-14830233c1a6", 00:06:57.968 "is_configured": true, 00:06:57.968 "data_offset": 0, 00:06:57.968 "data_size": 65536 00:06:57.968 }, 00:06:57.968 { 00:06:57.968 "name": "BaseBdev2", 00:06:57.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.968 "is_configured": false, 00:06:57.968 "data_offset": 0, 00:06:57.968 "data_size": 0 00:06:57.968 } 
00:06:57.968 ] 00:06:57.968 }' 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.968 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.227 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:58.227 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.227 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.228 [2024-11-28 02:22:31.850542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:58.228 [2024-11-28 02:22:31.850675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:58.228 [2024-11-28 02:22:31.850699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:58.228 [2024-11-28 02:22:31.851017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:58.228 [2024-11-28 02:22:31.851245] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:58.228 [2024-11-28 02:22:31.851293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:58.228 [2024-11-28 02:22:31.851567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.228 BaseBdev2 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:58.228 02:22:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.228 [ 00:06:58.228 { 00:06:58.228 "name": "BaseBdev2", 00:06:58.228 "aliases": [ 00:06:58.228 "d7f027a7-5958-42e5-ba97-8b1c5140d7a7" 00:06:58.228 ], 00:06:58.228 "product_name": "Malloc disk", 00:06:58.228 "block_size": 512, 00:06:58.228 "num_blocks": 65536, 00:06:58.228 "uuid": "d7f027a7-5958-42e5-ba97-8b1c5140d7a7", 00:06:58.228 "assigned_rate_limits": { 00:06:58.228 "rw_ios_per_sec": 0, 00:06:58.228 "rw_mbytes_per_sec": 0, 00:06:58.228 "r_mbytes_per_sec": 0, 00:06:58.228 "w_mbytes_per_sec": 0 00:06:58.228 }, 00:06:58.228 "claimed": true, 00:06:58.228 "claim_type": "exclusive_write", 00:06:58.228 "zoned": false, 00:06:58.228 "supported_io_types": { 00:06:58.228 "read": true, 00:06:58.228 "write": true, 00:06:58.228 "unmap": true, 00:06:58.228 "flush": true, 00:06:58.228 "reset": true, 00:06:58.228 "nvme_admin": false, 00:06:58.228 "nvme_io": false, 00:06:58.228 "nvme_io_md": 
false, 00:06:58.228 "write_zeroes": true, 00:06:58.228 "zcopy": true, 00:06:58.228 "get_zone_info": false, 00:06:58.228 "zone_management": false, 00:06:58.228 "zone_append": false, 00:06:58.228 "compare": false, 00:06:58.228 "compare_and_write": false, 00:06:58.228 "abort": true, 00:06:58.228 "seek_hole": false, 00:06:58.228 "seek_data": false, 00:06:58.228 "copy": true, 00:06:58.228 "nvme_iov_md": false 00:06:58.228 }, 00:06:58.228 "memory_domains": [ 00:06:58.228 { 00:06:58.228 "dma_device_id": "system", 00:06:58.228 "dma_device_type": 1 00:06:58.228 }, 00:06:58.228 { 00:06:58.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.228 "dma_device_type": 2 00:06:58.228 } 00:06:58.228 ], 00:06:58.228 "driver_specific": {} 00:06:58.228 } 00:06:58.228 ] 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.228 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.488 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.488 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.488 "name": "Existed_Raid", 00:06:58.488 "uuid": "48fdafd1-5002-4959-9982-c01fe2a08519", 00:06:58.488 "strip_size_kb": 64, 00:06:58.488 "state": "online", 00:06:58.488 "raid_level": "concat", 00:06:58.488 "superblock": false, 00:06:58.488 "num_base_bdevs": 2, 00:06:58.488 "num_base_bdevs_discovered": 2, 00:06:58.488 "num_base_bdevs_operational": 2, 00:06:58.488 "base_bdevs_list": [ 00:06:58.488 { 00:06:58.488 "name": "BaseBdev1", 00:06:58.488 "uuid": "b33de818-10e2-4ad9-bcd2-14830233c1a6", 00:06:58.488 "is_configured": true, 00:06:58.488 "data_offset": 0, 00:06:58.488 "data_size": 65536 00:06:58.488 }, 00:06:58.488 { 00:06:58.488 "name": "BaseBdev2", 00:06:58.488 "uuid": "d7f027a7-5958-42e5-ba97-8b1c5140d7a7", 00:06:58.488 "is_configured": true, 00:06:58.488 "data_offset": 0, 00:06:58.488 "data_size": 65536 00:06:58.488 } 00:06:58.488 ] 00:06:58.488 }' 00:06:58.488 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:58.488 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.748 [2024-11-28 02:22:32.278133] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.748 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:58.749 "name": "Existed_Raid", 00:06:58.749 "aliases": [ 00:06:58.749 "48fdafd1-5002-4959-9982-c01fe2a08519" 00:06:58.749 ], 00:06:58.749 "product_name": "Raid Volume", 00:06:58.749 "block_size": 512, 00:06:58.749 "num_blocks": 131072, 00:06:58.749 "uuid": "48fdafd1-5002-4959-9982-c01fe2a08519", 00:06:58.749 "assigned_rate_limits": { 00:06:58.749 "rw_ios_per_sec": 0, 00:06:58.749 "rw_mbytes_per_sec": 0, 00:06:58.749 "r_mbytes_per_sec": 
0, 00:06:58.749 "w_mbytes_per_sec": 0 00:06:58.749 }, 00:06:58.749 "claimed": false, 00:06:58.749 "zoned": false, 00:06:58.749 "supported_io_types": { 00:06:58.749 "read": true, 00:06:58.749 "write": true, 00:06:58.749 "unmap": true, 00:06:58.749 "flush": true, 00:06:58.749 "reset": true, 00:06:58.749 "nvme_admin": false, 00:06:58.749 "nvme_io": false, 00:06:58.749 "nvme_io_md": false, 00:06:58.749 "write_zeroes": true, 00:06:58.749 "zcopy": false, 00:06:58.749 "get_zone_info": false, 00:06:58.749 "zone_management": false, 00:06:58.749 "zone_append": false, 00:06:58.749 "compare": false, 00:06:58.749 "compare_and_write": false, 00:06:58.749 "abort": false, 00:06:58.749 "seek_hole": false, 00:06:58.749 "seek_data": false, 00:06:58.749 "copy": false, 00:06:58.749 "nvme_iov_md": false 00:06:58.749 }, 00:06:58.749 "memory_domains": [ 00:06:58.749 { 00:06:58.749 "dma_device_id": "system", 00:06:58.749 "dma_device_type": 1 00:06:58.749 }, 00:06:58.749 { 00:06:58.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.749 "dma_device_type": 2 00:06:58.749 }, 00:06:58.749 { 00:06:58.749 "dma_device_id": "system", 00:06:58.749 "dma_device_type": 1 00:06:58.749 }, 00:06:58.749 { 00:06:58.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.749 "dma_device_type": 2 00:06:58.749 } 00:06:58.749 ], 00:06:58.749 "driver_specific": { 00:06:58.749 "raid": { 00:06:58.749 "uuid": "48fdafd1-5002-4959-9982-c01fe2a08519", 00:06:58.749 "strip_size_kb": 64, 00:06:58.749 "state": "online", 00:06:58.749 "raid_level": "concat", 00:06:58.749 "superblock": false, 00:06:58.749 "num_base_bdevs": 2, 00:06:58.749 "num_base_bdevs_discovered": 2, 00:06:58.749 "num_base_bdevs_operational": 2, 00:06:58.749 "base_bdevs_list": [ 00:06:58.749 { 00:06:58.749 "name": "BaseBdev1", 00:06:58.749 "uuid": "b33de818-10e2-4ad9-bcd2-14830233c1a6", 00:06:58.749 "is_configured": true, 00:06:58.749 "data_offset": 0, 00:06:58.749 "data_size": 65536 00:06:58.749 }, 00:06:58.749 { 00:06:58.749 "name": "BaseBdev2", 
00:06:58.749 "uuid": "d7f027a7-5958-42e5-ba97-8b1c5140d7a7", 00:06:58.749 "is_configured": true, 00:06:58.749 "data_offset": 0, 00:06:58.749 "data_size": 65536 00:06:58.749 } 00:06:58.749 ] 00:06:58.749 } 00:06:58.749 } 00:06:58.749 }' 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:58.749 BaseBdev2' 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.749 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.018 [2024-11-28 02:22:32.509478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:59.018 [2024-11-28 02:22:32.509559] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:59.018 [2024-11-28 02:22:32.509613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.018 "name": "Existed_Raid", 00:06:59.018 "uuid": "48fdafd1-5002-4959-9982-c01fe2a08519", 00:06:59.018 "strip_size_kb": 64, 00:06:59.018 
"state": "offline", 00:06:59.018 "raid_level": "concat", 00:06:59.018 "superblock": false, 00:06:59.018 "num_base_bdevs": 2, 00:06:59.018 "num_base_bdevs_discovered": 1, 00:06:59.018 "num_base_bdevs_operational": 1, 00:06:59.018 "base_bdevs_list": [ 00:06:59.018 { 00:06:59.018 "name": null, 00:06:59.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.018 "is_configured": false, 00:06:59.018 "data_offset": 0, 00:06:59.018 "data_size": 65536 00:06:59.018 }, 00:06:59.018 { 00:06:59.018 "name": "BaseBdev2", 00:06:59.018 "uuid": "d7f027a7-5958-42e5-ba97-8b1c5140d7a7", 00:06:59.018 "is_configured": true, 00:06:59.018 "data_offset": 0, 00:06:59.018 "data_size": 65536 00:06:59.018 } 00:06:59.018 ] 00:06:59.018 }' 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.018 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.609 [2024-11-28 02:22:33.059999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:59.609 [2024-11-28 02:22:33.060119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61549 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61549 ']' 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61549 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61549 00:06:59.609 killing process with pid 61549 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61549' 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61549 00:06:59.609 [2024-11-28 02:22:33.242296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.609 02:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61549 00:06:59.609 [2024-11-28 02:22:33.258352] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:00.988 00:07:00.988 real 0m4.861s 00:07:00.988 user 0m7.023s 00:07:00.988 sys 0m0.740s 00:07:00.988 ************************************ 00:07:00.988 END TEST raid_state_function_test 00:07:00.988 ************************************ 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.988 02:22:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:00.988 02:22:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:00.988 02:22:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.988 02:22:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.988 ************************************ 00:07:00.988 START TEST raid_state_function_test_sb 00:07:00.988 ************************************ 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61802 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61802' 00:07:00.988 Process raid pid: 61802 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61802 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61802 ']' 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.988 02:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.988 [2024-11-28 02:22:34.478261] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:00.988 [2024-11-28 02:22:34.478413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.248 [2024-11-28 02:22:34.679775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.248 [2024-11-28 02:22:34.792173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.508 [2024-11-28 02:22:34.984783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.508 [2024-11-28 02:22:34.984825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.767 [2024-11-28 02:22:35.291857] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:01.767 [2024-11-28 02:22:35.291907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:01.767 [2024-11-28 02:22:35.291927] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:01.767 [2024-11-28 02:22:35.291937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.767 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.767 "name": "Existed_Raid", 00:07:01.767 "uuid": "ce20d56c-64e5-498b-9f5a-704a4b7a2c7e", 00:07:01.767 "strip_size_kb": 64, 00:07:01.767 "state": "configuring", 00:07:01.767 "raid_level": "concat", 00:07:01.767 "superblock": true, 00:07:01.767 "num_base_bdevs": 2, 00:07:01.767 "num_base_bdevs_discovered": 0, 00:07:01.767 "num_base_bdevs_operational": 2, 00:07:01.767 "base_bdevs_list": [ 00:07:01.767 { 00:07:01.767 "name": "BaseBdev1", 00:07:01.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.767 "is_configured": false, 00:07:01.767 "data_offset": 0, 00:07:01.768 "data_size": 0 00:07:01.768 }, 00:07:01.768 { 00:07:01.768 "name": "BaseBdev2", 00:07:01.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.768 "is_configured": false, 00:07:01.768 "data_offset": 0, 00:07:01.768 "data_size": 0 00:07:01.768 } 00:07:01.768 ] 00:07:01.768 }' 00:07:01.768 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.768 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.336 [2024-11-28 02:22:35.711068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:02.336 [2024-11-28 02:22:35.711106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.336 [2024-11-28 02:22:35.723058] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:02.336 [2024-11-28 02:22:35.723095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.336 [2024-11-28 02:22:35.723104] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.336 [2024-11-28 02:22:35.723115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.336 [2024-11-28 02:22:35.769091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.336 BaseBdev1 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:02.336 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.337 [ 00:07:02.337 { 00:07:02.337 "name": "BaseBdev1", 00:07:02.337 "aliases": [ 00:07:02.337 "5fa693a1-9158-4659-8af6-6761503a745c" 00:07:02.337 ], 00:07:02.337 "product_name": "Malloc disk", 00:07:02.337 "block_size": 512, 00:07:02.337 "num_blocks": 65536, 00:07:02.337 "uuid": "5fa693a1-9158-4659-8af6-6761503a745c", 00:07:02.337 "assigned_rate_limits": { 00:07:02.337 "rw_ios_per_sec": 0, 00:07:02.337 "rw_mbytes_per_sec": 0, 00:07:02.337 "r_mbytes_per_sec": 0, 00:07:02.337 "w_mbytes_per_sec": 0 00:07:02.337 }, 00:07:02.337 "claimed": true, 
00:07:02.337 "claim_type": "exclusive_write", 00:07:02.337 "zoned": false, 00:07:02.337 "supported_io_types": { 00:07:02.337 "read": true, 00:07:02.337 "write": true, 00:07:02.337 "unmap": true, 00:07:02.337 "flush": true, 00:07:02.337 "reset": true, 00:07:02.337 "nvme_admin": false, 00:07:02.337 "nvme_io": false, 00:07:02.337 "nvme_io_md": false, 00:07:02.337 "write_zeroes": true, 00:07:02.337 "zcopy": true, 00:07:02.337 "get_zone_info": false, 00:07:02.337 "zone_management": false, 00:07:02.337 "zone_append": false, 00:07:02.337 "compare": false, 00:07:02.337 "compare_and_write": false, 00:07:02.337 "abort": true, 00:07:02.337 "seek_hole": false, 00:07:02.337 "seek_data": false, 00:07:02.337 "copy": true, 00:07:02.337 "nvme_iov_md": false 00:07:02.337 }, 00:07:02.337 "memory_domains": [ 00:07:02.337 { 00:07:02.337 "dma_device_id": "system", 00:07:02.337 "dma_device_type": 1 00:07:02.337 }, 00:07:02.337 { 00:07:02.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.337 "dma_device_type": 2 00:07:02.337 } 00:07:02.337 ], 00:07:02.337 "driver_specific": {} 00:07:02.337 } 00:07:02.337 ] 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.337 02:22:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.337 "name": "Existed_Raid", 00:07:02.337 "uuid": "a183ecfe-ef0b-417d-b14b-29f09f3b74ac", 00:07:02.337 "strip_size_kb": 64, 00:07:02.337 "state": "configuring", 00:07:02.337 "raid_level": "concat", 00:07:02.337 "superblock": true, 00:07:02.337 "num_base_bdevs": 2, 00:07:02.337 "num_base_bdevs_discovered": 1, 00:07:02.337 "num_base_bdevs_operational": 2, 00:07:02.337 "base_bdevs_list": [ 00:07:02.337 { 00:07:02.337 "name": "BaseBdev1", 00:07:02.337 "uuid": "5fa693a1-9158-4659-8af6-6761503a745c", 00:07:02.337 "is_configured": true, 00:07:02.337 "data_offset": 2048, 00:07:02.337 "data_size": 63488 00:07:02.337 }, 00:07:02.337 { 00:07:02.337 "name": "BaseBdev2", 00:07:02.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.337 
"is_configured": false, 00:07:02.337 "data_offset": 0, 00:07:02.337 "data_size": 0 00:07:02.337 } 00:07:02.337 ] 00:07:02.337 }' 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.337 02:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 [2024-11-28 02:22:36.196411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:02.597 [2024-11-28 02:22:36.196470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 [2024-11-28 02:22:36.208444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.597 [2024-11-28 02:22:36.210310] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.597 [2024-11-28 02:22:36.210348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.597 02:22:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.597 02:22:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.597 "name": "Existed_Raid", 00:07:02.597 "uuid": "bd29d4df-3f57-4eed-af9d-bbca73de90e4", 00:07:02.597 "strip_size_kb": 64, 00:07:02.597 "state": "configuring", 00:07:02.597 "raid_level": "concat", 00:07:02.597 "superblock": true, 00:07:02.597 "num_base_bdevs": 2, 00:07:02.597 "num_base_bdevs_discovered": 1, 00:07:02.597 "num_base_bdevs_operational": 2, 00:07:02.597 "base_bdevs_list": [ 00:07:02.597 { 00:07:02.597 "name": "BaseBdev1", 00:07:02.597 "uuid": "5fa693a1-9158-4659-8af6-6761503a745c", 00:07:02.597 "is_configured": true, 00:07:02.597 "data_offset": 2048, 00:07:02.597 "data_size": 63488 00:07:02.597 }, 00:07:02.597 { 00:07:02.597 "name": "BaseBdev2", 00:07:02.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.597 "is_configured": false, 00:07:02.597 "data_offset": 0, 00:07:02.597 "data_size": 0 00:07:02.597 } 00:07:02.597 ] 00:07:02.597 }' 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.597 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.165 [2024-11-28 02:22:36.596106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:03.165 [2024-11-28 02:22:36.596355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:03.165 [2024-11-28 02:22:36.596370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:03.165 [2024-11-28 02:22:36.596641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:03.165 [2024-11-28 02:22:36.596802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:03.165 [2024-11-28 02:22:36.596824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:03.165 BaseBdev2 00:07:03.165 [2024-11-28 02:22:36.596989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.165 02:22:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.165 [ 00:07:03.165 { 00:07:03.165 "name": "BaseBdev2", 00:07:03.165 "aliases": [ 00:07:03.165 "1e9d5f23-8846-4d9f-8661-ee0b111424e1" 00:07:03.165 ], 00:07:03.165 "product_name": "Malloc disk", 00:07:03.165 "block_size": 512, 00:07:03.165 "num_blocks": 65536, 00:07:03.165 "uuid": "1e9d5f23-8846-4d9f-8661-ee0b111424e1", 00:07:03.165 "assigned_rate_limits": { 00:07:03.165 "rw_ios_per_sec": 0, 00:07:03.165 "rw_mbytes_per_sec": 0, 00:07:03.165 "r_mbytes_per_sec": 0, 00:07:03.165 "w_mbytes_per_sec": 0 00:07:03.165 }, 00:07:03.165 "claimed": true, 00:07:03.165 "claim_type": "exclusive_write", 00:07:03.165 "zoned": false, 00:07:03.165 "supported_io_types": { 00:07:03.165 "read": true, 00:07:03.165 "write": true, 00:07:03.165 "unmap": true, 00:07:03.165 "flush": true, 00:07:03.165 "reset": true, 00:07:03.165 "nvme_admin": false, 00:07:03.165 "nvme_io": false, 00:07:03.165 "nvme_io_md": false, 00:07:03.165 "write_zeroes": true, 00:07:03.165 "zcopy": true, 00:07:03.165 "get_zone_info": false, 00:07:03.165 "zone_management": false, 00:07:03.165 "zone_append": false, 00:07:03.165 "compare": false, 00:07:03.165 "compare_and_write": false, 00:07:03.165 "abort": true, 00:07:03.165 "seek_hole": false, 00:07:03.165 "seek_data": false, 00:07:03.165 "copy": true, 00:07:03.165 "nvme_iov_md": false 00:07:03.165 }, 00:07:03.165 "memory_domains": [ 00:07:03.165 { 00:07:03.165 "dma_device_id": "system", 00:07:03.165 "dma_device_type": 1 00:07:03.165 }, 00:07:03.165 { 00:07:03.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.165 "dma_device_type": 2 00:07:03.165 } 00:07:03.165 ], 00:07:03.165 "driver_specific": {} 00:07:03.165 } 00:07:03.165 ] 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:03.165 02:22:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.165 02:22:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.165 "name": "Existed_Raid", 00:07:03.165 "uuid": "bd29d4df-3f57-4eed-af9d-bbca73de90e4", 00:07:03.165 "strip_size_kb": 64, 00:07:03.165 "state": "online", 00:07:03.165 "raid_level": "concat", 00:07:03.165 "superblock": true, 00:07:03.165 "num_base_bdevs": 2, 00:07:03.165 "num_base_bdevs_discovered": 2, 00:07:03.165 "num_base_bdevs_operational": 2, 00:07:03.165 "base_bdevs_list": [ 00:07:03.165 { 00:07:03.165 "name": "BaseBdev1", 00:07:03.165 "uuid": "5fa693a1-9158-4659-8af6-6761503a745c", 00:07:03.165 "is_configured": true, 00:07:03.165 "data_offset": 2048, 00:07:03.165 "data_size": 63488 00:07:03.165 }, 00:07:03.165 { 00:07:03.165 "name": "BaseBdev2", 00:07:03.165 "uuid": "1e9d5f23-8846-4d9f-8661-ee0b111424e1", 00:07:03.165 "is_configured": true, 00:07:03.165 "data_offset": 2048, 00:07:03.165 "data_size": 63488 00:07:03.165 } 00:07:03.165 ] 00:07:03.165 }' 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.165 02:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.425 [2024-11-28 02:22:37.043644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:03.425 "name": "Existed_Raid", 00:07:03.425 "aliases": [ 00:07:03.425 "bd29d4df-3f57-4eed-af9d-bbca73de90e4" 00:07:03.425 ], 00:07:03.425 "product_name": "Raid Volume", 00:07:03.425 "block_size": 512, 00:07:03.425 "num_blocks": 126976, 00:07:03.425 "uuid": "bd29d4df-3f57-4eed-af9d-bbca73de90e4", 00:07:03.425 "assigned_rate_limits": { 00:07:03.425 "rw_ios_per_sec": 0, 00:07:03.425 "rw_mbytes_per_sec": 0, 00:07:03.425 "r_mbytes_per_sec": 0, 00:07:03.425 "w_mbytes_per_sec": 0 00:07:03.425 }, 00:07:03.425 "claimed": false, 00:07:03.425 "zoned": false, 00:07:03.425 "supported_io_types": { 00:07:03.425 "read": true, 00:07:03.425 "write": true, 00:07:03.425 "unmap": true, 00:07:03.425 "flush": true, 00:07:03.425 "reset": true, 00:07:03.425 "nvme_admin": false, 00:07:03.425 "nvme_io": false, 00:07:03.425 "nvme_io_md": false, 00:07:03.425 "write_zeroes": true, 00:07:03.425 "zcopy": false, 00:07:03.425 "get_zone_info": false, 00:07:03.425 "zone_management": false, 00:07:03.425 "zone_append": false, 00:07:03.425 "compare": false, 00:07:03.425 "compare_and_write": false, 00:07:03.425 "abort": false, 00:07:03.425 "seek_hole": false, 00:07:03.425 "seek_data": false, 00:07:03.425 "copy": false, 00:07:03.425 "nvme_iov_md": false 00:07:03.425 }, 00:07:03.425 "memory_domains": [ 00:07:03.425 { 00:07:03.425 
"dma_device_id": "system", 00:07:03.425 "dma_device_type": 1 00:07:03.425 }, 00:07:03.425 { 00:07:03.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.425 "dma_device_type": 2 00:07:03.425 }, 00:07:03.425 { 00:07:03.425 "dma_device_id": "system", 00:07:03.425 "dma_device_type": 1 00:07:03.425 }, 00:07:03.425 { 00:07:03.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.425 "dma_device_type": 2 00:07:03.425 } 00:07:03.425 ], 00:07:03.425 "driver_specific": { 00:07:03.425 "raid": { 00:07:03.425 "uuid": "bd29d4df-3f57-4eed-af9d-bbca73de90e4", 00:07:03.425 "strip_size_kb": 64, 00:07:03.425 "state": "online", 00:07:03.425 "raid_level": "concat", 00:07:03.425 "superblock": true, 00:07:03.425 "num_base_bdevs": 2, 00:07:03.425 "num_base_bdevs_discovered": 2, 00:07:03.425 "num_base_bdevs_operational": 2, 00:07:03.425 "base_bdevs_list": [ 00:07:03.425 { 00:07:03.425 "name": "BaseBdev1", 00:07:03.425 "uuid": "5fa693a1-9158-4659-8af6-6761503a745c", 00:07:03.425 "is_configured": true, 00:07:03.425 "data_offset": 2048, 00:07:03.425 "data_size": 63488 00:07:03.425 }, 00:07:03.425 { 00:07:03.425 "name": "BaseBdev2", 00:07:03.425 "uuid": "1e9d5f23-8846-4d9f-8661-ee0b111424e1", 00:07:03.425 "is_configured": true, 00:07:03.425 "data_offset": 2048, 00:07:03.425 "data_size": 63488 00:07:03.425 } 00:07:03.425 ] 00:07:03.425 } 00:07:03.425 } 00:07:03.425 }' 00:07:03.425 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:03.685 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:03.685 BaseBdev2' 00:07:03.685 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.685 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:03.685 02:22:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.685 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:03.685 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.685 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.686 [2024-11-28 02:22:37.239085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:03.686 [2024-11-28 02:22:37.239122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.686 [2024-11-28 02:22:37.239172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.686 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.946 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.946 "name": "Existed_Raid", 00:07:03.946 "uuid": "bd29d4df-3f57-4eed-af9d-bbca73de90e4", 00:07:03.946 "strip_size_kb": 64, 00:07:03.946 "state": "offline", 00:07:03.946 "raid_level": "concat", 00:07:03.946 "superblock": true, 00:07:03.946 "num_base_bdevs": 2, 00:07:03.946 "num_base_bdevs_discovered": 1, 00:07:03.946 "num_base_bdevs_operational": 1, 00:07:03.946 "base_bdevs_list": [ 00:07:03.946 { 00:07:03.946 "name": null, 00:07:03.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.946 "is_configured": false, 00:07:03.946 "data_offset": 0, 00:07:03.946 "data_size": 63488 00:07:03.946 }, 00:07:03.946 { 00:07:03.946 "name": "BaseBdev2", 00:07:03.946 "uuid": "1e9d5f23-8846-4d9f-8661-ee0b111424e1", 00:07:03.946 "is_configured": true, 00:07:03.946 "data_offset": 2048, 00:07:03.946 "data_size": 63488 00:07:03.946 } 00:07:03.946 ] 
00:07:03.946 }' 00:07:03.946 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.946 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.205 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:04.205 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:04.205 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:04.205 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.205 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.206 [2024-11-28 02:22:37.736737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:04.206 [2024-11-28 02:22:37.736794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.206 02:22:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61802 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61802 ']' 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61802 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:04.206 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.465 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61802 00:07:04.465 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.465 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:04.465 killing process with pid 61802 00:07:04.465 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61802' 00:07:04.465 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61802 00:07:04.465 [2024-11-28 02:22:37.906567] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.465 02:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61802 00:07:04.465 [2024-11-28 02:22:37.922569] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.405 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:05.405 00:07:05.405 real 0m4.631s 00:07:05.405 user 0m6.543s 00:07:05.405 sys 0m0.785s 00:07:05.405 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.405 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.405 ************************************ 00:07:05.405 END TEST raid_state_function_test_sb 00:07:05.405 ************************************ 00:07:05.405 02:22:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:05.405 02:22:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:05.405 02:22:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.405 02:22:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.665 ************************************ 00:07:05.665 START TEST raid_superblock_test 00:07:05.665 ************************************ 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62043 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62043 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62043 ']' 00:07:05.665 02:22:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.665 02:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.665 [2024-11-28 02:22:39.172124] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:05.665 [2024-11-28 02:22:39.172236] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62043 ] 00:07:05.665 [2024-11-28 02:22:39.342142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.925 [2024-11-28 02:22:39.452101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.185 [2024-11-28 02:22:39.649031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.185 [2024-11-28 02:22:39.649066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:06.445 
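The `(( i <= num_base_bdevs ))` loop entered above (bdev_raid.sh@416-423) builds three parallel arrays naming the base bdevs before any RPCs are issued. A minimal standalone sketch of that bookkeeping, assuming the `malloc<i>`/`pt<i>` naming and zero-padded UUID scheme visible in this trace:

```shell
# Sketch only: mirrors the per-iteration variable setup traced at
# bdev_raid.sh@416-423. Names and UUID format are taken from the log;
# the actual rpc_cmd calls that follow each iteration are omitted here
# because no running SPDK app is assumed.
num_base_bdevs=2
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()
for ((i = 1; i <= num_base_bdevs; i++)); do
  bdev_malloc="malloc$i"
  bdev_pt="pt$i"
  # e.g. 00000000-0000-0000-0000-000000000001 for i=1
  bdev_pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
  base_bdevs_malloc+=("$bdev_malloc")
  base_bdevs_pt+=("$bdev_pt")
  base_bdevs_pt_uuid+=("$bdev_pt_uuid")
done
echo "${base_bdevs_pt[*]}"
```

Each iteration of the real test then runs `rpc_cmd bdev_malloc_create 32 512 -b $bdev_malloc` followed by `rpc_cmd bdev_passthru_create -b $bdev_malloc -p $bdev_pt -u $bdev_pt_uuid`, as the trace entries that follow show.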
02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.445 malloc1
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.445 [2024-11-28 02:22:40.057211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:06.445 [2024-11-28 02:22:40.057280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:06.445 [2024-11-28 02:22:40.057301] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:06.445 [2024-11-28 02:22:40.057311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:06.445 [2024-11-28 02:22:40.059317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:06.445 [2024-11-28 02:22:40.059349] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.445 malloc2
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
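Later in this transcript (bdev_raid.sh@188) the configured base-bdev names are extracted from `bdev_get_bdevs` output with jq. A minimal sketch of that filter, run against a hand-written sample JSON standing in for the RPC reply (no live SPDK socket is assumed; the jq expression is the one visible in the trace):

```shell
# Sample JSON: a stand-in for `rpc_cmd bdev_get_bdevs -b raid_bdev1` output.
info='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"pt1","is_configured":true},
  {"name":"pt2","is_configured":true}]}}}'

# Filter used by the test: keep only configured base bdevs, emit their names.
names=$(echo "$info" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$names"
```

In the log, the resulting list (`pt1 pt2`) is what `base_bdev_names` is set to before each base bdev is compared against the raid volume's geometry.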
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.445 [2024-11-28 02:22:40.114334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:06.445 [2024-11-28 02:22:40.114398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:06.445 [2024-11-28 02:22:40.114422] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:06.445 [2024-11-28 02:22:40.114431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:06.445 [2024-11-28 02:22:40.116617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:06.445 [2024-11-28 02:22:40.116652] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:06.445 pt2
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:06.445 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.705 [2024-11-28 02:22:40.126347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:06.705 [2024-11-28 02:22:40.128173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:06.705 [2024-11-28 02:22:40.128342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:06.705 [2024-11-28 02:22:40.128355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
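With the raid bdev configured, the trace next runs verify_raid_bdev_state (bdev_raid.sh@113), which pipes `rpc_cmd bdev_raid_get_bdevs all` through jq to isolate one entry by name. A hedged sketch of that selection step, using a hand-written sample array in place of the RPC reply (no live SPDK socket assumed):

```shell
# Sample reply shaped like the bdev_raid_get_bdevs output dumped in this log.
reply='[{"name":"raid_bdev1","state":"online","num_base_bdevs_discovered":2}]'

# Same jq pattern as bdev_raid.sh@113: select the bdev by name, then read a field.
state=$(echo "$reply" | jq -r '.[] | select(.name == "raid_bdev1") | .state')
echo "$state"
```

The test stores the full selected object in `raid_bdev_info` and then compares `.state`, `.raid_level`, `.strip_size_kb`, and the base-bdev counts against the expected values, as the `raid_bdev_info='{...}'` dumps in this transcript show.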
00:07:06.705 [2024-11-28 02:22:40.128650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:06.705 [2024-11-28 02:22:40.128805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:06.705 [2024-11-28 02:22:40.128824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:06.705 [2024-11-28 02:22:40.128974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.705 02:22:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.705 "name": "raid_bdev1", 00:07:06.705 "uuid": "5119c410-3de9-4132-b459-b04ea2225def", 00:07:06.705 "strip_size_kb": 64, 00:07:06.705 "state": "online", 00:07:06.705 "raid_level": "concat", 00:07:06.705 "superblock": true, 00:07:06.705 "num_base_bdevs": 2, 00:07:06.705 "num_base_bdevs_discovered": 2, 00:07:06.705 "num_base_bdevs_operational": 2, 00:07:06.705 "base_bdevs_list": [ 00:07:06.705 { 00:07:06.705 "name": "pt1", 00:07:06.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:06.705 "is_configured": true, 00:07:06.705 "data_offset": 2048, 00:07:06.705 "data_size": 63488 00:07:06.705 }, 00:07:06.705 { 00:07:06.705 "name": "pt2", 00:07:06.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:06.705 "is_configured": true, 00:07:06.705 "data_offset": 2048, 00:07:06.705 "data_size": 63488 00:07:06.705 } 00:07:06.705 ] 00:07:06.705 }' 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.705 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:06.964 
02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.964 [2024-11-28 02:22:40.557841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.964 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:06.964 "name": "raid_bdev1", 00:07:06.964 "aliases": [ 00:07:06.964 "5119c410-3de9-4132-b459-b04ea2225def" 00:07:06.964 ], 00:07:06.964 "product_name": "Raid Volume", 00:07:06.964 "block_size": 512, 00:07:06.964 "num_blocks": 126976, 00:07:06.964 "uuid": "5119c410-3de9-4132-b459-b04ea2225def", 00:07:06.964 "assigned_rate_limits": { 00:07:06.964 "rw_ios_per_sec": 0, 00:07:06.964 "rw_mbytes_per_sec": 0, 00:07:06.964 "r_mbytes_per_sec": 0, 00:07:06.964 "w_mbytes_per_sec": 0 00:07:06.964 }, 00:07:06.964 "claimed": false, 00:07:06.964 "zoned": false, 00:07:06.964 "supported_io_types": { 00:07:06.964 "read": true, 00:07:06.964 "write": true, 00:07:06.964 "unmap": true, 00:07:06.964 "flush": true, 00:07:06.964 "reset": true, 00:07:06.964 "nvme_admin": false, 00:07:06.964 "nvme_io": false, 00:07:06.964 "nvme_io_md": false, 00:07:06.964 "write_zeroes": true, 00:07:06.964 "zcopy": false, 00:07:06.964 "get_zone_info": false, 00:07:06.964 "zone_management": false, 00:07:06.964 "zone_append": false, 00:07:06.964 "compare": false, 00:07:06.964 "compare_and_write": false, 00:07:06.964 "abort": false, 00:07:06.964 "seek_hole": false, 00:07:06.964 
"seek_data": false, 00:07:06.964 "copy": false, 00:07:06.964 "nvme_iov_md": false 00:07:06.964 }, 00:07:06.964 "memory_domains": [ 00:07:06.964 { 00:07:06.964 "dma_device_id": "system", 00:07:06.964 "dma_device_type": 1 00:07:06.964 }, 00:07:06.964 { 00:07:06.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.965 "dma_device_type": 2 00:07:06.965 }, 00:07:06.965 { 00:07:06.965 "dma_device_id": "system", 00:07:06.965 "dma_device_type": 1 00:07:06.965 }, 00:07:06.965 { 00:07:06.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.965 "dma_device_type": 2 00:07:06.965 } 00:07:06.965 ], 00:07:06.965 "driver_specific": { 00:07:06.965 "raid": { 00:07:06.965 "uuid": "5119c410-3de9-4132-b459-b04ea2225def", 00:07:06.965 "strip_size_kb": 64, 00:07:06.965 "state": "online", 00:07:06.965 "raid_level": "concat", 00:07:06.965 "superblock": true, 00:07:06.965 "num_base_bdevs": 2, 00:07:06.965 "num_base_bdevs_discovered": 2, 00:07:06.965 "num_base_bdevs_operational": 2, 00:07:06.965 "base_bdevs_list": [ 00:07:06.965 { 00:07:06.965 "name": "pt1", 00:07:06.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:06.965 "is_configured": true, 00:07:06.965 "data_offset": 2048, 00:07:06.965 "data_size": 63488 00:07:06.965 }, 00:07:06.965 { 00:07:06.965 "name": "pt2", 00:07:06.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:06.965 "is_configured": true, 00:07:06.965 "data_offset": 2048, 00:07:06.965 "data_size": 63488 00:07:06.965 } 00:07:06.965 ] 00:07:06.965 } 00:07:06.965 } 00:07:06.965 }' 00:07:06.965 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:06.965 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:06.965 pt2' 00:07:06.965 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.224 02:22:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:07.224 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.224 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.224 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:07.224 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.225 [2024-11-28 02:22:40.781391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5119c410-3de9-4132-b459-b04ea2225def 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5119c410-3de9-4132-b459-b04ea2225def ']' 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.225 [2024-11-28 02:22:40.829046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:07.225 [2024-11-28 02:22:40.829071] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.225 [2024-11-28 02:22:40.829150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.225 [2024-11-28 02:22:40.829198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.225 [2024-11-28 02:22:40.829210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.225 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.484 [2024-11-28 02:22:40.972839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:07.484 [2024-11-28 02:22:40.974699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:07.484 [2024-11-28 02:22:40.974767] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:07.484 [2024-11-28 02:22:40.974811] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:07.484 [2024-11-28 02:22:40.974825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:07.484 [2024-11-28 02:22:40.974835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:07.484 request: 00:07:07.484 { 00:07:07.484 "name": "raid_bdev1", 00:07:07.484 "raid_level": "concat", 00:07:07.484 "base_bdevs": [ 00:07:07.484 "malloc1", 00:07:07.484 "malloc2" 00:07:07.484 ], 00:07:07.484 "strip_size_kb": 64, 00:07:07.484 "superblock": false, 00:07:07.484 "method": "bdev_raid_create", 00:07:07.484 "req_id": 1 00:07:07.484 } 00:07:07.484 Got JSON-RPC error response 00:07:07.484 response: 00:07:07.484 { 00:07:07.484 "code": -17, 00:07:07.484 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:07.484 } 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:07.484 02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.484 
02:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.484 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:07.484 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.485 [2024-11-28 02:22:41.020736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:07.485 [2024-11-28 02:22:41.020778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.485 [2024-11-28 02:22:41.020792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:07.485 [2024-11-28 02:22:41.020802] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.485 [2024-11-28 02:22:41.022939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.485 [2024-11-28 02:22:41.022971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:07.485 [2024-11-28 02:22:41.023048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:07.485 [2024-11-28 02:22:41.023121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:07.485 pt1 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.485 "name": "raid_bdev1", 00:07:07.485 "uuid": "5119c410-3de9-4132-b459-b04ea2225def", 00:07:07.485 "strip_size_kb": 64, 00:07:07.485 "state": "configuring", 00:07:07.485 "raid_level": "concat", 00:07:07.485 "superblock": true, 00:07:07.485 "num_base_bdevs": 2, 00:07:07.485 "num_base_bdevs_discovered": 1, 00:07:07.485 "num_base_bdevs_operational": 2, 00:07:07.485 "base_bdevs_list": [ 00:07:07.485 { 00:07:07.485 "name": "pt1", 00:07:07.485 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:07.485 "is_configured": true, 00:07:07.485 "data_offset": 2048, 00:07:07.485 "data_size": 63488 00:07:07.485 }, 00:07:07.485 { 00:07:07.485 "name": null, 00:07:07.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:07.485 "is_configured": false, 00:07:07.485 "data_offset": 2048, 00:07:07.485 "data_size": 63488 00:07:07.485 } 00:07:07.485 ] 00:07:07.485 }' 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.485 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.053 [2024-11-28 02:22:41.456060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:08.053 [2024-11-28 02:22:41.456128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.053 [2024-11-28 02:22:41.456151] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:08.053 [2024-11-28 02:22:41.456162] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.053 [2024-11-28 02:22:41.456628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.053 [2024-11-28 02:22:41.456648] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:08.053 [2024-11-28 02:22:41.456725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:08.053 [2024-11-28 02:22:41.456752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:08.053 [2024-11-28 02:22:41.456864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:08.053 [2024-11-28 02:22:41.456875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:08.053 [2024-11-28 02:22:41.457137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:08.053 [2024-11-28 02:22:41.457300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:08.053 [2024-11-28 02:22:41.457316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:08.053 [2024-11-28 02:22:41.457446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.053 pt2 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.053 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.054 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.054 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.054 "name": "raid_bdev1", 00:07:08.054 "uuid": "5119c410-3de9-4132-b459-b04ea2225def", 00:07:08.054 "strip_size_kb": 64, 00:07:08.054 "state": "online", 00:07:08.054 "raid_level": "concat", 00:07:08.054 "superblock": true, 00:07:08.054 "num_base_bdevs": 2, 00:07:08.054 "num_base_bdevs_discovered": 2, 00:07:08.054 "num_base_bdevs_operational": 2, 00:07:08.054 "base_bdevs_list": [ 00:07:08.054 { 00:07:08.054 "name": "pt1", 00:07:08.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.054 "is_configured": true, 00:07:08.054 "data_offset": 2048, 00:07:08.054 "data_size": 63488 00:07:08.054 }, 00:07:08.054 { 00:07:08.054 "name": "pt2", 00:07:08.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.054 "is_configured": true, 00:07:08.054 "data_offset": 2048, 00:07:08.054 "data_size": 63488 00:07:08.054 } 00:07:08.054 ] 00:07:08.054 }' 00:07:08.054 02:22:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.054 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.314 [2024-11-28 02:22:41.819641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:08.314 "name": "raid_bdev1", 00:07:08.314 "aliases": [ 00:07:08.314 "5119c410-3de9-4132-b459-b04ea2225def" 00:07:08.314 ], 00:07:08.314 "product_name": "Raid Volume", 00:07:08.314 "block_size": 512, 00:07:08.314 "num_blocks": 126976, 00:07:08.314 "uuid": "5119c410-3de9-4132-b459-b04ea2225def", 00:07:08.314 "assigned_rate_limits": { 00:07:08.314 "rw_ios_per_sec": 0, 00:07:08.314 "rw_mbytes_per_sec": 0, 00:07:08.314 
"r_mbytes_per_sec": 0, 00:07:08.314 "w_mbytes_per_sec": 0 00:07:08.314 }, 00:07:08.314 "claimed": false, 00:07:08.314 "zoned": false, 00:07:08.314 "supported_io_types": { 00:07:08.314 "read": true, 00:07:08.314 "write": true, 00:07:08.314 "unmap": true, 00:07:08.314 "flush": true, 00:07:08.314 "reset": true, 00:07:08.314 "nvme_admin": false, 00:07:08.314 "nvme_io": false, 00:07:08.314 "nvme_io_md": false, 00:07:08.314 "write_zeroes": true, 00:07:08.314 "zcopy": false, 00:07:08.314 "get_zone_info": false, 00:07:08.314 "zone_management": false, 00:07:08.314 "zone_append": false, 00:07:08.314 "compare": false, 00:07:08.314 "compare_and_write": false, 00:07:08.314 "abort": false, 00:07:08.314 "seek_hole": false, 00:07:08.314 "seek_data": false, 00:07:08.314 "copy": false, 00:07:08.314 "nvme_iov_md": false 00:07:08.314 }, 00:07:08.314 "memory_domains": [ 00:07:08.314 { 00:07:08.314 "dma_device_id": "system", 00:07:08.314 "dma_device_type": 1 00:07:08.314 }, 00:07:08.314 { 00:07:08.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.314 "dma_device_type": 2 00:07:08.314 }, 00:07:08.314 { 00:07:08.314 "dma_device_id": "system", 00:07:08.314 "dma_device_type": 1 00:07:08.314 }, 00:07:08.314 { 00:07:08.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.314 "dma_device_type": 2 00:07:08.314 } 00:07:08.314 ], 00:07:08.314 "driver_specific": { 00:07:08.314 "raid": { 00:07:08.314 "uuid": "5119c410-3de9-4132-b459-b04ea2225def", 00:07:08.314 "strip_size_kb": 64, 00:07:08.314 "state": "online", 00:07:08.314 "raid_level": "concat", 00:07:08.314 "superblock": true, 00:07:08.314 "num_base_bdevs": 2, 00:07:08.314 "num_base_bdevs_discovered": 2, 00:07:08.314 "num_base_bdevs_operational": 2, 00:07:08.314 "base_bdevs_list": [ 00:07:08.314 { 00:07:08.314 "name": "pt1", 00:07:08.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.314 "is_configured": true, 00:07:08.314 "data_offset": 2048, 00:07:08.314 "data_size": 63488 00:07:08.314 }, 00:07:08.314 { 00:07:08.314 "name": 
"pt2", 00:07:08.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.314 "is_configured": true, 00:07:08.314 "data_offset": 2048, 00:07:08.314 "data_size": 63488 00:07:08.314 } 00:07:08.314 ] 00:07:08.314 } 00:07:08.314 } 00:07:08.314 }' 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:08.314 pt2' 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.314 02:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.574 [2024-11-28 02:22:42.027303] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5119c410-3de9-4132-b459-b04ea2225def '!=' 5119c410-3de9-4132-b459-b04ea2225def ']' 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62043 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62043 ']' 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62043 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62043 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.574 killing process with pid 62043 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62043' 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62043 00:07:08.574 [2024-11-28 02:22:42.076508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.574 [2024-11-28 02:22:42.076598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.574 [2024-11-28 02:22:42.076654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.574 [2024-11-28 02:22:42.076667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:08.574 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62043 00:07:08.835 [2024-11-28 02:22:42.272848] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.774 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:09.774 00:07:09.774 real 0m4.263s 00:07:09.774 user 0m5.926s 00:07:09.774 sys 0m0.714s 00:07:09.774 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.774 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:09.774 ************************************ 00:07:09.774 END TEST raid_superblock_test 00:07:09.774 ************************************ 00:07:09.774 02:22:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:09.774 02:22:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:09.774 02:22:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.774 02:22:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.774 ************************************ 00:07:09.774 START TEST raid_read_error_test 00:07:09.774 ************************************ 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:09.774 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1XhtmeCcFr 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62250 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62250 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62250 ']' 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.775 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.775 02:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.034 [2024-11-28 02:22:43.520494] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:10.034 [2024-11-28 02:22:43.520616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62250 ] 00:07:10.034 [2024-11-28 02:22:43.689819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.294 [2024-11-28 02:22:43.798307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.554 [2024-11-28 02:22:43.990814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.554 [2024-11-28 02:22:43.990863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.816 BaseBdev1_malloc 
00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.816 true 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.816 [2024-11-28 02:22:44.398391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:10.816 [2024-11-28 02:22:44.398450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.816 [2024-11-28 02:22:44.398471] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:10.816 [2024-11-28 02:22:44.398482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.816 [2024-11-28 02:22:44.400625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.816 [2024-11-28 02:22:44.400667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:10.816 BaseBdev1 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.816 BaseBdev2_malloc 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.816 true 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.816 [2024-11-28 02:22:44.464036] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:10.816 [2024-11-28 02:22:44.464098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.816 [2024-11-28 02:22:44.464118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:10.816 [2024-11-28 02:22:44.464129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.816 [2024-11-28 02:22:44.466259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.816 [2024-11-28 02:22:44.466378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:10.816 BaseBdev2 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.816 [2024-11-28 02:22:44.476103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:10.816 [2024-11-28 02:22:44.477915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.816 [2024-11-28 02:22:44.478141] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:10.816 [2024-11-28 02:22:44.478162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:10.816 [2024-11-28 02:22:44.478432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:10.816 [2024-11-28 02:22:44.478612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:10.816 [2024-11-28 02:22:44.478631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:10.816 [2024-11-28 02:22:44.478811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.816 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.077 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.077 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.077 "name": "raid_bdev1", 00:07:11.077 "uuid": "7a7332fa-0e7c-4401-8a3f-edf4edfb57bf", 00:07:11.077 "strip_size_kb": 64, 00:07:11.077 "state": "online", 00:07:11.077 "raid_level": "concat", 00:07:11.077 "superblock": true, 00:07:11.077 "num_base_bdevs": 2, 00:07:11.077 "num_base_bdevs_discovered": 2, 00:07:11.077 "num_base_bdevs_operational": 2, 00:07:11.077 "base_bdevs_list": [ 00:07:11.077 { 00:07:11.077 "name": "BaseBdev1", 00:07:11.077 "uuid": "1b6e0036-1fcd-5df2-9770-1b094fa04f36", 00:07:11.077 "is_configured": true, 00:07:11.077 "data_offset": 2048, 00:07:11.077 "data_size": 63488 00:07:11.077 }, 00:07:11.077 { 00:07:11.077 "name": "BaseBdev2", 00:07:11.077 
"uuid": "6c6893c4-015a-582c-a99f-631e31e4e150", 00:07:11.077 "is_configured": true, 00:07:11.077 "data_offset": 2048, 00:07:11.077 "data_size": 63488 00:07:11.077 } 00:07:11.077 ] 00:07:11.077 }' 00:07:11.077 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.077 02:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.337 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:11.337 02:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:11.337 [2024-11-28 02:22:44.976704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.332 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.333 "name": "raid_bdev1", 00:07:12.333 "uuid": "7a7332fa-0e7c-4401-8a3f-edf4edfb57bf", 00:07:12.333 "strip_size_kb": 64, 00:07:12.333 "state": "online", 00:07:12.333 "raid_level": "concat", 00:07:12.333 "superblock": true, 00:07:12.333 "num_base_bdevs": 2, 00:07:12.333 "num_base_bdevs_discovered": 2, 00:07:12.333 "num_base_bdevs_operational": 2, 00:07:12.333 "base_bdevs_list": [ 00:07:12.333 { 00:07:12.333 "name": "BaseBdev1", 00:07:12.333 "uuid": "1b6e0036-1fcd-5df2-9770-1b094fa04f36", 00:07:12.333 "is_configured": true, 00:07:12.333 "data_offset": 2048, 00:07:12.333 "data_size": 63488 00:07:12.333 }, 00:07:12.333 { 00:07:12.333 "name": "BaseBdev2", 00:07:12.333 "uuid": 
"6c6893c4-015a-582c-a99f-631e31e4e150", 00:07:12.333 "is_configured": true, 00:07:12.333 "data_offset": 2048, 00:07:12.333 "data_size": 63488 00:07:12.333 } 00:07:12.333 ] 00:07:12.333 }' 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.333 02:22:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.903 [2024-11-28 02:22:46.310715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:12.903 [2024-11-28 02:22:46.310828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.903 [2024-11-28 02:22:46.313610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.903 [2024-11-28 02:22:46.313692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.903 [2024-11-28 02:22:46.313742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.903 [2024-11-28 02:22:46.313784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:12.903 { 00:07:12.903 "results": [ 00:07:12.903 { 00:07:12.903 "job": "raid_bdev1", 00:07:12.903 "core_mask": "0x1", 00:07:12.903 "workload": "randrw", 00:07:12.903 "percentage": 50, 00:07:12.903 "status": "finished", 00:07:12.903 "queue_depth": 1, 00:07:12.903 "io_size": 131072, 00:07:12.903 "runtime": 1.335066, 00:07:12.903 "iops": 16575.210513937138, 00:07:12.903 "mibps": 2071.9013142421422, 00:07:12.903 "io_failed": 1, 00:07:12.903 "io_timeout": 0, 00:07:12.903 "avg_latency_us": 
83.22020360040017, 00:07:12.903 "min_latency_us": 24.705676855895195, 00:07:12.903 "max_latency_us": 1366.5257641921398 00:07:12.903 } 00:07:12.903 ], 00:07:12.903 "core_count": 1 00:07:12.903 } 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62250 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62250 ']' 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62250 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62250 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62250' 00:07:12.903 killing process with pid 62250 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62250 00:07:12.903 [2024-11-28 02:22:46.363098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.903 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62250 00:07:12.903 [2024-11-28 02:22:46.493842] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.285 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1XhtmeCcFr 00:07:14.286 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:14.286 
02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:14.286 ************************************ 00:07:14.286 END TEST raid_read_error_test 00:07:14.286 ************************************ 00:07:14.286 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:14.286 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:14.286 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:14.286 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:14.286 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:14.286 00:07:14.286 real 0m4.229s 00:07:14.286 user 0m5.008s 00:07:14.286 sys 0m0.531s 00:07:14.286 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.286 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.286 02:22:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:14.286 02:22:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:14.286 02:22:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.286 02:22:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.286 ************************************ 00:07:14.286 START TEST raid_write_error_test 00:07:14.286 ************************************ 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:14.286 02:22:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wYo2Buuu1p 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62395 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62395 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62395 ']' 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.286 02:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.286 [2024-11-28 02:22:47.816585] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:14.286 [2024-11-28 02:22:47.817246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62395 ] 00:07:14.546 [2024-11-28 02:22:47.990693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.546 [2024-11-28 02:22:48.093295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.806 [2024-11-28 02:22:48.286945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.806 [2024-11-28 02:22:48.286978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.066 BaseBdev1_malloc 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.066 true 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.066 [2024-11-28 02:22:48.700690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:15.066 [2024-11-28 02:22:48.700746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.066 [2024-11-28 02:22:48.700763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:15.066 [2024-11-28 02:22:48.700773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.066 [2024-11-28 02:22:48.702835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.066 [2024-11-28 02:22:48.702873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:15.066 BaseBdev1 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.066 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.326 BaseBdev2_malloc 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:15.327 02:22:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.327 true 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.327 [2024-11-28 02:22:48.764791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:15.327 [2024-11-28 02:22:48.764906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.327 [2024-11-28 02:22:48.764939] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:15.327 [2024-11-28 02:22:48.764950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.327 [2024-11-28 02:22:48.767041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.327 [2024-11-28 02:22:48.767075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:15.327 BaseBdev2 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.327 [2024-11-28 02:22:48.776829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:15.327 [2024-11-28 02:22:48.778624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.327 [2024-11-28 02:22:48.778802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:15.327 [2024-11-28 02:22:48.778817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.327 [2024-11-28 02:22:48.779070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:15.327 [2024-11-28 02:22:48.779248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:15.327 [2024-11-28 02:22:48.779265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:15.327 [2024-11-28 02:22:48.779416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.327 02:22:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.327 "name": "raid_bdev1", 00:07:15.327 "uuid": "23cf81f2-67f6-4484-a937-b025355dd6ef", 00:07:15.327 "strip_size_kb": 64, 00:07:15.327 "state": "online", 00:07:15.327 "raid_level": "concat", 00:07:15.327 "superblock": true, 00:07:15.327 "num_base_bdevs": 2, 00:07:15.327 "num_base_bdevs_discovered": 2, 00:07:15.327 "num_base_bdevs_operational": 2, 00:07:15.327 "base_bdevs_list": [ 00:07:15.327 { 00:07:15.327 "name": "BaseBdev1", 00:07:15.327 "uuid": "ebadf587-22e6-5951-889c-858f91361a0d", 00:07:15.327 "is_configured": true, 00:07:15.327 "data_offset": 2048, 00:07:15.327 "data_size": 63488 00:07:15.327 }, 00:07:15.327 { 00:07:15.327 "name": "BaseBdev2", 00:07:15.327 "uuid": "0b9e7531-046f-513e-9608-5ef2cc76ae74", 00:07:15.327 "is_configured": true, 00:07:15.327 "data_offset": 2048, 00:07:15.327 "data_size": 63488 00:07:15.327 } 00:07:15.327 ] 00:07:15.327 }' 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.327 02:22:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.588 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:15.588 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:15.848 [2024-11-28 02:22:49.293199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.787 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.787 "name": "raid_bdev1", 00:07:16.787 "uuid": "23cf81f2-67f6-4484-a937-b025355dd6ef", 00:07:16.787 "strip_size_kb": 64, 00:07:16.787 "state": "online", 00:07:16.787 "raid_level": "concat", 00:07:16.787 "superblock": true, 00:07:16.787 "num_base_bdevs": 2, 00:07:16.787 "num_base_bdevs_discovered": 2, 00:07:16.787 "num_base_bdevs_operational": 2, 00:07:16.787 "base_bdevs_list": [ 00:07:16.787 { 00:07:16.787 "name": "BaseBdev1", 00:07:16.787 "uuid": "ebadf587-22e6-5951-889c-858f91361a0d", 00:07:16.787 "is_configured": true, 00:07:16.787 "data_offset": 2048, 00:07:16.787 "data_size": 63488 00:07:16.787 }, 00:07:16.787 { 00:07:16.787 "name": "BaseBdev2", 00:07:16.788 "uuid": "0b9e7531-046f-513e-9608-5ef2cc76ae74", 00:07:16.788 "is_configured": true, 00:07:16.788 "data_offset": 2048, 00:07:16.788 "data_size": 63488 00:07:16.788 } 00:07:16.788 ] 00:07:16.788 }' 00:07:16.788 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.788 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.047 [2024-11-28 02:22:50.657163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:17.047 [2024-11-28 02:22:50.657201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.047 [2024-11-28 02:22:50.659904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.047 [2024-11-28 02:22:50.659959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.047 [2024-11-28 02:22:50.659991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.047 [2024-11-28 02:22:50.660003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:17.047 { 00:07:17.047 "results": [ 00:07:17.047 { 00:07:17.047 "job": "raid_bdev1", 00:07:17.047 "core_mask": "0x1", 00:07:17.047 "workload": "randrw", 00:07:17.047 "percentage": 50, 00:07:17.047 "status": "finished", 00:07:17.047 "queue_depth": 1, 00:07:17.047 "io_size": 131072, 00:07:17.047 "runtime": 1.364919, 00:07:17.047 "iops": 16396.577379317016, 00:07:17.047 "mibps": 2049.572172414627, 00:07:17.047 "io_failed": 1, 00:07:17.047 "io_timeout": 0, 00:07:17.047 "avg_latency_us": 84.23160864964805, 00:07:17.047 "min_latency_us": 25.9353711790393, 00:07:17.047 "max_latency_us": 1380.8349344978167 00:07:17.047 } 00:07:17.047 ], 00:07:17.047 "core_count": 1 00:07:17.047 } 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62395 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 62395 ']' 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62395 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62395 00:07:17.047 killing process with pid 62395 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62395' 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62395 00:07:17.047 [2024-11-28 02:22:50.705595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.047 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62395 00:07:17.307 [2024-11-28 02:22:50.839238] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.689 02:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wYo2Buuu1p 00:07:18.689 02:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:18.689 02:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:18.690 02:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:18.690 02:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:18.690 02:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:18.690 02:22:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:18.690 02:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:18.690 00:07:18.690 real 0m4.284s 00:07:18.690 user 0m5.128s 00:07:18.690 sys 0m0.521s 00:07:18.690 02:22:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.690 02:22:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.690 ************************************ 00:07:18.690 END TEST raid_write_error_test 00:07:18.690 ************************************ 00:07:18.690 02:22:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:18.690 02:22:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:18.690 02:22:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:18.690 02:22:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.690 02:22:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.690 ************************************ 00:07:18.690 START TEST raid_state_function_test 00:07:18.690 ************************************ 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62533 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62533' 00:07:18.690 Process raid pid: 62533 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62533 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62533 ']' 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.690 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.690 [2024-11-28 02:22:52.162182] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:18.690 [2024-11-28 02:22:52.162291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.690 [2024-11-28 02:22:52.333528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.950 [2024-11-28 02:22:52.440764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.210 [2024-11-28 02:22:52.635378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.210 [2024-11-28 02:22:52.635414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.470 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.470 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:19.470 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.470 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.470 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.470 [2024-11-28 02:22:52.999742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.470 [2024-11-28 02:22:52.999801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.470 [2024-11-28 02:22:52.999812] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.471 [2024-11-28 02:22:52.999821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.471 02:22:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.471 "name": "Existed_Raid", 00:07:19.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.471 "strip_size_kb": 0, 00:07:19.471 "state": "configuring", 00:07:19.471 
"raid_level": "raid1", 00:07:19.471 "superblock": false, 00:07:19.471 "num_base_bdevs": 2, 00:07:19.471 "num_base_bdevs_discovered": 0, 00:07:19.471 "num_base_bdevs_operational": 2, 00:07:19.471 "base_bdevs_list": [ 00:07:19.471 { 00:07:19.471 "name": "BaseBdev1", 00:07:19.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.471 "is_configured": false, 00:07:19.471 "data_offset": 0, 00:07:19.471 "data_size": 0 00:07:19.471 }, 00:07:19.471 { 00:07:19.471 "name": "BaseBdev2", 00:07:19.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.471 "is_configured": false, 00:07:19.471 "data_offset": 0, 00:07:19.471 "data_size": 0 00:07:19.471 } 00:07:19.471 ] 00:07:19.471 }' 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.471 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.730 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.730 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.730 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.730 [2024-11-28 02:22:53.407030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.730 [2024-11-28 02:22:53.407064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:19.992 [2024-11-28 02:22:53.419005] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.992 [2024-11-28 02:22:53.419046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.992 [2024-11-28 02:22:53.419055] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.992 [2024-11-28 02:22:53.419066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.992 [2024-11-28 02:22:53.462868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.992 BaseBdev1 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.992 [ 00:07:19.992 { 00:07:19.992 "name": "BaseBdev1", 00:07:19.992 "aliases": [ 00:07:19.992 "0e1fd7ba-3a88-4bc4-91c2-75d1f48157da" 00:07:19.992 ], 00:07:19.992 "product_name": "Malloc disk", 00:07:19.992 "block_size": 512, 00:07:19.992 "num_blocks": 65536, 00:07:19.992 "uuid": "0e1fd7ba-3a88-4bc4-91c2-75d1f48157da", 00:07:19.992 "assigned_rate_limits": { 00:07:19.992 "rw_ios_per_sec": 0, 00:07:19.992 "rw_mbytes_per_sec": 0, 00:07:19.992 "r_mbytes_per_sec": 0, 00:07:19.992 "w_mbytes_per_sec": 0 00:07:19.992 }, 00:07:19.992 "claimed": true, 00:07:19.992 "claim_type": "exclusive_write", 00:07:19.992 "zoned": false, 00:07:19.992 "supported_io_types": { 00:07:19.992 "read": true, 00:07:19.992 "write": true, 00:07:19.992 "unmap": true, 00:07:19.992 "flush": true, 00:07:19.992 "reset": true, 00:07:19.992 "nvme_admin": false, 00:07:19.992 "nvme_io": false, 00:07:19.992 "nvme_io_md": false, 00:07:19.992 "write_zeroes": true, 00:07:19.992 "zcopy": true, 00:07:19.992 "get_zone_info": false, 00:07:19.992 "zone_management": false, 00:07:19.992 "zone_append": false, 00:07:19.992 "compare": false, 00:07:19.992 "compare_and_write": false, 00:07:19.992 "abort": true, 00:07:19.992 "seek_hole": false, 00:07:19.992 "seek_data": false, 00:07:19.992 "copy": true, 00:07:19.992 "nvme_iov_md": 
false 00:07:19.992 }, 00:07:19.992 "memory_domains": [ 00:07:19.992 { 00:07:19.992 "dma_device_id": "system", 00:07:19.992 "dma_device_type": 1 00:07:19.992 }, 00:07:19.992 { 00:07:19.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.992 "dma_device_type": 2 00:07:19.992 } 00:07:19.992 ], 00:07:19.992 "driver_specific": {} 00:07:19.992 } 00:07:19.992 ] 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.992 02:22:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.992 "name": "Existed_Raid", 00:07:19.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.992 "strip_size_kb": 0, 00:07:19.992 "state": "configuring", 00:07:19.992 "raid_level": "raid1", 00:07:19.992 "superblock": false, 00:07:19.992 "num_base_bdevs": 2, 00:07:19.992 "num_base_bdevs_discovered": 1, 00:07:19.992 "num_base_bdevs_operational": 2, 00:07:19.992 "base_bdevs_list": [ 00:07:19.992 { 00:07:19.992 "name": "BaseBdev1", 00:07:19.992 "uuid": "0e1fd7ba-3a88-4bc4-91c2-75d1f48157da", 00:07:19.992 "is_configured": true, 00:07:19.992 "data_offset": 0, 00:07:19.992 "data_size": 65536 00:07:19.992 }, 00:07:19.992 { 00:07:19.992 "name": "BaseBdev2", 00:07:19.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.992 "is_configured": false, 00:07:19.992 "data_offset": 0, 00:07:19.992 "data_size": 0 00:07:19.992 } 00:07:19.992 ] 00:07:19.992 }' 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.992 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.252 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.252 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.252 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.253 [2024-11-28 02:22:53.910128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.253 [2024-11-28 02:22:53.910178] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.253 [2024-11-28 02:22:53.922131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.253 [2024-11-28 02:22:53.923901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.253 [2024-11-28 02:22:53.923954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.253 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.513 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.513 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.513 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.513 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.513 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.513 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.513 "name": "Existed_Raid", 00:07:20.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.513 "strip_size_kb": 0, 00:07:20.513 "state": "configuring", 00:07:20.513 "raid_level": "raid1", 00:07:20.513 "superblock": false, 00:07:20.513 "num_base_bdevs": 2, 00:07:20.513 "num_base_bdevs_discovered": 1, 00:07:20.513 "num_base_bdevs_operational": 2, 00:07:20.513 "base_bdevs_list": [ 00:07:20.513 { 00:07:20.513 "name": "BaseBdev1", 00:07:20.513 "uuid": "0e1fd7ba-3a88-4bc4-91c2-75d1f48157da", 00:07:20.513 "is_configured": true, 00:07:20.513 "data_offset": 0, 00:07:20.513 "data_size": 65536 00:07:20.513 }, 00:07:20.513 { 00:07:20.513 "name": "BaseBdev2", 00:07:20.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.513 "is_configured": false, 00:07:20.513 "data_offset": 0, 00:07:20.513 "data_size": 0 00:07:20.513 } 00:07:20.513 
] 00:07:20.513 }' 00:07:20.513 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.513 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.772 [2024-11-28 02:22:54.435853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:20.772 [2024-11-28 02:22:54.435928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:20.772 [2024-11-28 02:22:54.435954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:20.772 [2024-11-28 02:22:54.436218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:20.772 [2024-11-28 02:22:54.436396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:20.772 [2024-11-28 02:22:54.436415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:20.772 [2024-11-28 02:22:54.436684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.772 BaseBdev2 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:20.772 02:22:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:20.772 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.773 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 [ 00:07:21.033 { 00:07:21.033 "name": "BaseBdev2", 00:07:21.033 "aliases": [ 00:07:21.033 "061b2906-4247-4ef8-b7b1-cef7252f52ab" 00:07:21.033 ], 00:07:21.033 "product_name": "Malloc disk", 00:07:21.033 "block_size": 512, 00:07:21.033 "num_blocks": 65536, 00:07:21.033 "uuid": "061b2906-4247-4ef8-b7b1-cef7252f52ab", 00:07:21.033 "assigned_rate_limits": { 00:07:21.033 "rw_ios_per_sec": 0, 00:07:21.033 "rw_mbytes_per_sec": 0, 00:07:21.033 "r_mbytes_per_sec": 0, 00:07:21.033 "w_mbytes_per_sec": 0 00:07:21.033 }, 00:07:21.033 "claimed": true, 00:07:21.033 "claim_type": "exclusive_write", 00:07:21.033 "zoned": false, 00:07:21.033 "supported_io_types": { 00:07:21.033 "read": true, 00:07:21.033 "write": true, 00:07:21.033 "unmap": true, 00:07:21.033 "flush": true, 00:07:21.033 "reset": true, 00:07:21.033 "nvme_admin": false, 00:07:21.033 "nvme_io": false, 00:07:21.033 "nvme_io_md": 
false, 00:07:21.033 "write_zeroes": true, 00:07:21.033 "zcopy": true, 00:07:21.033 "get_zone_info": false, 00:07:21.033 "zone_management": false, 00:07:21.033 "zone_append": false, 00:07:21.033 "compare": false, 00:07:21.033 "compare_and_write": false, 00:07:21.033 "abort": true, 00:07:21.033 "seek_hole": false, 00:07:21.033 "seek_data": false, 00:07:21.033 "copy": true, 00:07:21.033 "nvme_iov_md": false 00:07:21.033 }, 00:07:21.033 "memory_domains": [ 00:07:21.033 { 00:07:21.033 "dma_device_id": "system", 00:07:21.033 "dma_device_type": 1 00:07:21.033 }, 00:07:21.033 { 00:07:21.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.033 "dma_device_type": 2 00:07:21.033 } 00:07:21.033 ], 00:07:21.033 "driver_specific": {} 00:07:21.033 } 00:07:21.033 ] 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.033 "name": "Existed_Raid", 00:07:21.033 "uuid": "5fd9e31e-6383-4022-b5e0-ddbe0a85c1ef", 00:07:21.033 "strip_size_kb": 0, 00:07:21.033 "state": "online", 00:07:21.033 "raid_level": "raid1", 00:07:21.033 "superblock": false, 00:07:21.033 "num_base_bdevs": 2, 00:07:21.033 "num_base_bdevs_discovered": 2, 00:07:21.033 "num_base_bdevs_operational": 2, 00:07:21.033 "base_bdevs_list": [ 00:07:21.033 { 00:07:21.033 "name": "BaseBdev1", 00:07:21.033 "uuid": "0e1fd7ba-3a88-4bc4-91c2-75d1f48157da", 00:07:21.033 "is_configured": true, 00:07:21.033 "data_offset": 0, 00:07:21.033 "data_size": 65536 00:07:21.033 }, 00:07:21.033 { 00:07:21.033 "name": "BaseBdev2", 00:07:21.033 "uuid": "061b2906-4247-4ef8-b7b1-cef7252f52ab", 00:07:21.033 "is_configured": true, 00:07:21.033 "data_offset": 0, 00:07:21.033 "data_size": 65536 00:07:21.033 } 00:07:21.033 ] 00:07:21.033 }' 00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:21.033 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.293 [2024-11-28 02:22:54.863476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.293 "name": "Existed_Raid", 00:07:21.293 "aliases": [ 00:07:21.293 "5fd9e31e-6383-4022-b5e0-ddbe0a85c1ef" 00:07:21.293 ], 00:07:21.293 "product_name": "Raid Volume", 00:07:21.293 "block_size": 512, 00:07:21.293 "num_blocks": 65536, 00:07:21.293 "uuid": "5fd9e31e-6383-4022-b5e0-ddbe0a85c1ef", 00:07:21.293 "assigned_rate_limits": { 00:07:21.293 "rw_ios_per_sec": 0, 00:07:21.293 "rw_mbytes_per_sec": 0, 00:07:21.293 "r_mbytes_per_sec": 
0, 00:07:21.293 "w_mbytes_per_sec": 0 00:07:21.293 }, 00:07:21.293 "claimed": false, 00:07:21.293 "zoned": false, 00:07:21.293 "supported_io_types": { 00:07:21.293 "read": true, 00:07:21.293 "write": true, 00:07:21.293 "unmap": false, 00:07:21.293 "flush": false, 00:07:21.293 "reset": true, 00:07:21.293 "nvme_admin": false, 00:07:21.293 "nvme_io": false, 00:07:21.293 "nvme_io_md": false, 00:07:21.293 "write_zeroes": true, 00:07:21.293 "zcopy": false, 00:07:21.293 "get_zone_info": false, 00:07:21.293 "zone_management": false, 00:07:21.293 "zone_append": false, 00:07:21.293 "compare": false, 00:07:21.293 "compare_and_write": false, 00:07:21.293 "abort": false, 00:07:21.293 "seek_hole": false, 00:07:21.293 "seek_data": false, 00:07:21.293 "copy": false, 00:07:21.293 "nvme_iov_md": false 00:07:21.293 }, 00:07:21.293 "memory_domains": [ 00:07:21.293 { 00:07:21.293 "dma_device_id": "system", 00:07:21.293 "dma_device_type": 1 00:07:21.293 }, 00:07:21.293 { 00:07:21.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.293 "dma_device_type": 2 00:07:21.293 }, 00:07:21.293 { 00:07:21.293 "dma_device_id": "system", 00:07:21.293 "dma_device_type": 1 00:07:21.293 }, 00:07:21.293 { 00:07:21.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.293 "dma_device_type": 2 00:07:21.293 } 00:07:21.293 ], 00:07:21.293 "driver_specific": { 00:07:21.293 "raid": { 00:07:21.293 "uuid": "5fd9e31e-6383-4022-b5e0-ddbe0a85c1ef", 00:07:21.293 "strip_size_kb": 0, 00:07:21.293 "state": "online", 00:07:21.293 "raid_level": "raid1", 00:07:21.293 "superblock": false, 00:07:21.293 "num_base_bdevs": 2, 00:07:21.293 "num_base_bdevs_discovered": 2, 00:07:21.293 "num_base_bdevs_operational": 2, 00:07:21.293 "base_bdevs_list": [ 00:07:21.293 { 00:07:21.293 "name": "BaseBdev1", 00:07:21.293 "uuid": "0e1fd7ba-3a88-4bc4-91c2-75d1f48157da", 00:07:21.293 "is_configured": true, 00:07:21.293 "data_offset": 0, 00:07:21.293 "data_size": 65536 00:07:21.293 }, 00:07:21.293 { 00:07:21.293 "name": "BaseBdev2", 
00:07:21.293 "uuid": "061b2906-4247-4ef8-b7b1-cef7252f52ab", 00:07:21.293 "is_configured": true, 00:07:21.293 "data_offset": 0, 00:07:21.293 "data_size": 65536 00:07:21.293 } 00:07:21.293 ] 00:07:21.293 } 00:07:21.293 } 00:07:21.293 }' 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.293 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:21.293 BaseBdev2' 00:07:21.294 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.554 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.554 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.554 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:21.554 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.554 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.554 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 [2024-11-28 02:22:55.070830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.554 "name": "Existed_Raid", 00:07:21.554 "uuid": "5fd9e31e-6383-4022-b5e0-ddbe0a85c1ef", 00:07:21.554 "strip_size_kb": 0, 00:07:21.554 "state": "online", 00:07:21.554 "raid_level": "raid1", 00:07:21.554 "superblock": false, 00:07:21.554 "num_base_bdevs": 2, 00:07:21.554 "num_base_bdevs_discovered": 1, 00:07:21.554 "num_base_bdevs_operational": 1, 00:07:21.554 "base_bdevs_list": [ 00:07:21.554 
{ 00:07:21.554 "name": null, 00:07:21.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.554 "is_configured": false, 00:07:21.554 "data_offset": 0, 00:07:21.554 "data_size": 65536 00:07:21.554 }, 00:07:21.554 { 00:07:21.554 "name": "BaseBdev2", 00:07:21.554 "uuid": "061b2906-4247-4ef8-b7b1-cef7252f52ab", 00:07:21.554 "is_configured": true, 00:07:21.554 "data_offset": 0, 00:07:21.554 "data_size": 65536 00:07:21.554 } 00:07:21.554 ] 00:07:21.554 }' 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.554 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:22.124 [2024-11-28 02:22:55.583057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:22.124 [2024-11-28 02:22:55.583150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.124 [2024-11-28 02:22:55.677360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.124 [2024-11-28 02:22:55.677427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.124 [2024-11-28 02:22:55.677439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62533 00:07:22.124 02:22:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62533 ']' 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62533 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62533 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.124 killing process with pid 62533 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62533' 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62533 00:07:22.124 [2024-11-28 02:22:55.769569] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.124 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62533 00:07:22.124 [2024-11-28 02:22:55.786202] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:23.507 00:07:23.507 real 0m4.812s 00:07:23.507 user 0m6.893s 00:07:23.507 sys 0m0.803s 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.507 ************************************ 00:07:23.507 END TEST raid_state_function_test 00:07:23.507 ************************************ 00:07:23.507 02:22:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:23.507 02:22:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:23.507 02:22:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.507 02:22:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.507 ************************************ 00:07:23.507 START TEST raid_state_function_test_sb 00:07:23.507 ************************************ 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62786 00:07:23.507 Process raid pid: 62786 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62786' 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62786 00:07:23.507 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62786 ']' 00:07:23.508 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.508 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.508 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.508 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.508 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.508 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.508 [2024-11-28 02:22:57.042966] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:23.508 [2024-11-28 02:22:57.043085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.767 [2024-11-28 02:22:57.214587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.767 [2024-11-28 02:22:57.327415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.027 [2024-11-28 02:22:57.521605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.027 [2024-11-28 02:22:57.521646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.287 [2024-11-28 02:22:57.890111] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:24.287 [2024-11-28 02:22:57.890166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.287 [2024-11-28 02:22:57.890176] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.287 [2024-11-28 02:22:57.890186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.287 "name": "Existed_Raid", 00:07:24.287 "uuid": "c58602c6-f534-41d1-9af6-3ce351fa96bf", 00:07:24.287 "strip_size_kb": 0, 00:07:24.287 "state": "configuring", 00:07:24.287 "raid_level": "raid1", 00:07:24.287 "superblock": true, 00:07:24.287 "num_base_bdevs": 2, 00:07:24.287 "num_base_bdevs_discovered": 0, 00:07:24.287 "num_base_bdevs_operational": 2, 00:07:24.287 "base_bdevs_list": [ 00:07:24.287 { 00:07:24.287 "name": "BaseBdev1", 00:07:24.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.287 "is_configured": false, 00:07:24.287 "data_offset": 0, 00:07:24.287 "data_size": 0 00:07:24.287 }, 00:07:24.287 { 00:07:24.287 "name": "BaseBdev2", 00:07:24.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.287 "is_configured": false, 00:07:24.287 "data_offset": 0, 00:07:24.287 "data_size": 0 00:07:24.287 } 00:07:24.287 ] 00:07:24.287 }' 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.287 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.858 [2024-11-28 02:22:58.301374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:24.858 [2024-11-28 02:22:58.301415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.858 [2024-11-28 02:22:58.313324] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:24.858 [2024-11-28 02:22:58.313367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.858 [2024-11-28 02:22:58.313375] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.858 [2024-11-28 02:22:58.313402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.858 [2024-11-28 02:22:58.361411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:24.858 BaseBdev1 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.858 [ 00:07:24.858 { 00:07:24.858 "name": "BaseBdev1", 00:07:24.858 "aliases": [ 00:07:24.858 "5f11592b-5a3c-40d2-999e-f744de59c54e" 00:07:24.858 ], 00:07:24.858 "product_name": "Malloc disk", 00:07:24.858 "block_size": 512, 00:07:24.858 "num_blocks": 65536, 00:07:24.858 "uuid": "5f11592b-5a3c-40d2-999e-f744de59c54e", 00:07:24.858 "assigned_rate_limits": { 00:07:24.858 "rw_ios_per_sec": 0, 00:07:24.858 "rw_mbytes_per_sec": 0, 00:07:24.858 "r_mbytes_per_sec": 0, 00:07:24.858 "w_mbytes_per_sec": 0 00:07:24.858 }, 00:07:24.858 "claimed": true, 
00:07:24.858 "claim_type": "exclusive_write", 00:07:24.858 "zoned": false, 00:07:24.858 "supported_io_types": { 00:07:24.858 "read": true, 00:07:24.858 "write": true, 00:07:24.858 "unmap": true, 00:07:24.858 "flush": true, 00:07:24.858 "reset": true, 00:07:24.858 "nvme_admin": false, 00:07:24.858 "nvme_io": false, 00:07:24.858 "nvme_io_md": false, 00:07:24.858 "write_zeroes": true, 00:07:24.858 "zcopy": true, 00:07:24.858 "get_zone_info": false, 00:07:24.858 "zone_management": false, 00:07:24.858 "zone_append": false, 00:07:24.858 "compare": false, 00:07:24.858 "compare_and_write": false, 00:07:24.858 "abort": true, 00:07:24.858 "seek_hole": false, 00:07:24.858 "seek_data": false, 00:07:24.858 "copy": true, 00:07:24.858 "nvme_iov_md": false 00:07:24.858 }, 00:07:24.858 "memory_domains": [ 00:07:24.858 { 00:07:24.858 "dma_device_id": "system", 00:07:24.858 "dma_device_type": 1 00:07:24.858 }, 00:07:24.858 { 00:07:24.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.858 "dma_device_type": 2 00:07:24.858 } 00:07:24.858 ], 00:07:24.858 "driver_specific": {} 00:07:24.858 } 00:07:24.858 ] 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.858 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.858 "name": "Existed_Raid", 00:07:24.859 "uuid": "fb53e4b7-3a6a-4cbc-9a33-8f4edf281eb4", 00:07:24.859 "strip_size_kb": 0, 00:07:24.859 "state": "configuring", 00:07:24.859 "raid_level": "raid1", 00:07:24.859 "superblock": true, 00:07:24.859 "num_base_bdevs": 2, 00:07:24.859 "num_base_bdevs_discovered": 1, 00:07:24.859 "num_base_bdevs_operational": 2, 00:07:24.859 "base_bdevs_list": [ 00:07:24.859 { 00:07:24.859 "name": "BaseBdev1", 00:07:24.859 "uuid": "5f11592b-5a3c-40d2-999e-f744de59c54e", 00:07:24.859 "is_configured": true, 00:07:24.859 "data_offset": 2048, 00:07:24.859 "data_size": 63488 00:07:24.859 }, 00:07:24.859 { 00:07:24.859 "name": "BaseBdev2", 00:07:24.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.859 "is_configured": false, 00:07:24.859 
"data_offset": 0, 00:07:24.859 "data_size": 0 00:07:24.859 } 00:07:24.859 ] 00:07:24.859 }' 00:07:24.859 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.859 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.427 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.427 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.427 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.427 [2024-11-28 02:22:58.816689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.427 [2024-11-28 02:22:58.816751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:25.427 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.427 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.427 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.427 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.427 [2024-11-28 02:22:58.824710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.427 [2024-11-28 02:22:58.826570] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.427 [2024-11-28 02:22:58.826613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.427 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.427 02:22:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:25.427 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.428 "name": "Existed_Raid", 00:07:25.428 "uuid": "30e30454-0b50-4940-8ad0-f2ddf8189945", 00:07:25.428 "strip_size_kb": 0, 00:07:25.428 "state": "configuring", 00:07:25.428 "raid_level": "raid1", 00:07:25.428 "superblock": true, 00:07:25.428 "num_base_bdevs": 2, 00:07:25.428 "num_base_bdevs_discovered": 1, 00:07:25.428 "num_base_bdevs_operational": 2, 00:07:25.428 "base_bdevs_list": [ 00:07:25.428 { 00:07:25.428 "name": "BaseBdev1", 00:07:25.428 "uuid": "5f11592b-5a3c-40d2-999e-f744de59c54e", 00:07:25.428 "is_configured": true, 00:07:25.428 "data_offset": 2048, 00:07:25.428 "data_size": 63488 00:07:25.428 }, 00:07:25.428 { 00:07:25.428 "name": "BaseBdev2", 00:07:25.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.428 "is_configured": false, 00:07:25.428 "data_offset": 0, 00:07:25.428 "data_size": 0 00:07:25.428 } 00:07:25.428 ] 00:07:25.428 }' 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.428 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.687 [2024-11-28 02:22:59.246000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:25.687 [2024-11-28 02:22:59.246243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:25.687 [2024-11-28 02:22:59.246257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:25.687 [2024-11-28 02:22:59.246511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:25.687 
[2024-11-28 02:22:59.246670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:25.687 [2024-11-28 02:22:59.246697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:25.687 BaseBdev2 00:07:25.687 [2024-11-28 02:22:59.246862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.687 [ 00:07:25.687 { 00:07:25.687 "name": "BaseBdev2", 00:07:25.687 "aliases": [ 00:07:25.687 "75291fe4-01ec-4a87-85ad-2f6676fa2305" 00:07:25.687 ], 00:07:25.687 "product_name": "Malloc disk", 00:07:25.687 "block_size": 512, 00:07:25.687 "num_blocks": 65536, 00:07:25.687 "uuid": "75291fe4-01ec-4a87-85ad-2f6676fa2305", 00:07:25.687 "assigned_rate_limits": { 00:07:25.687 "rw_ios_per_sec": 0, 00:07:25.687 "rw_mbytes_per_sec": 0, 00:07:25.687 "r_mbytes_per_sec": 0, 00:07:25.687 "w_mbytes_per_sec": 0 00:07:25.687 }, 00:07:25.687 "claimed": true, 00:07:25.687 "claim_type": "exclusive_write", 00:07:25.687 "zoned": false, 00:07:25.687 "supported_io_types": { 00:07:25.687 "read": true, 00:07:25.687 "write": true, 00:07:25.687 "unmap": true, 00:07:25.687 "flush": true, 00:07:25.687 "reset": true, 00:07:25.687 "nvme_admin": false, 00:07:25.687 "nvme_io": false, 00:07:25.687 "nvme_io_md": false, 00:07:25.687 "write_zeroes": true, 00:07:25.687 "zcopy": true, 00:07:25.687 "get_zone_info": false, 00:07:25.687 "zone_management": false, 00:07:25.687 "zone_append": false, 00:07:25.687 "compare": false, 00:07:25.687 "compare_and_write": false, 00:07:25.687 "abort": true, 00:07:25.687 "seek_hole": false, 00:07:25.687 "seek_data": false, 00:07:25.687 "copy": true, 00:07:25.687 "nvme_iov_md": false 00:07:25.687 }, 00:07:25.687 "memory_domains": [ 00:07:25.687 { 00:07:25.687 "dma_device_id": "system", 00:07:25.687 "dma_device_type": 1 00:07:25.687 }, 00:07:25.687 { 00:07:25.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.687 "dma_device_type": 2 00:07:25.687 } 00:07:25.687 ], 00:07:25.687 "driver_specific": {} 00:07:25.687 } 00:07:25.687 ] 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.687 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:25.687 "name": "Existed_Raid", 00:07:25.687 "uuid": "30e30454-0b50-4940-8ad0-f2ddf8189945", 00:07:25.687 "strip_size_kb": 0, 00:07:25.687 "state": "online", 00:07:25.687 "raid_level": "raid1", 00:07:25.687 "superblock": true, 00:07:25.687 "num_base_bdevs": 2, 00:07:25.687 "num_base_bdevs_discovered": 2, 00:07:25.687 "num_base_bdevs_operational": 2, 00:07:25.688 "base_bdevs_list": [ 00:07:25.688 { 00:07:25.688 "name": "BaseBdev1", 00:07:25.688 "uuid": "5f11592b-5a3c-40d2-999e-f744de59c54e", 00:07:25.688 "is_configured": true, 00:07:25.688 "data_offset": 2048, 00:07:25.688 "data_size": 63488 00:07:25.688 }, 00:07:25.688 { 00:07:25.688 "name": "BaseBdev2", 00:07:25.688 "uuid": "75291fe4-01ec-4a87-85ad-2f6676fa2305", 00:07:25.688 "is_configured": true, 00:07:25.688 "data_offset": 2048, 00:07:25.688 "data_size": 63488 00:07:25.688 } 00:07:25.688 ] 00:07:25.688 }' 00:07:25.688 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.688 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:26.255 02:22:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.255 [2024-11-28 02:22:59.737406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.255 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.255 "name": "Existed_Raid", 00:07:26.255 "aliases": [ 00:07:26.255 "30e30454-0b50-4940-8ad0-f2ddf8189945" 00:07:26.255 ], 00:07:26.255 "product_name": "Raid Volume", 00:07:26.255 "block_size": 512, 00:07:26.255 "num_blocks": 63488, 00:07:26.255 "uuid": "30e30454-0b50-4940-8ad0-f2ddf8189945", 00:07:26.255 "assigned_rate_limits": { 00:07:26.255 "rw_ios_per_sec": 0, 00:07:26.255 "rw_mbytes_per_sec": 0, 00:07:26.255 "r_mbytes_per_sec": 0, 00:07:26.255 "w_mbytes_per_sec": 0 00:07:26.255 }, 00:07:26.255 "claimed": false, 00:07:26.255 "zoned": false, 00:07:26.255 "supported_io_types": { 00:07:26.255 "read": true, 00:07:26.255 "write": true, 00:07:26.255 "unmap": false, 00:07:26.255 "flush": false, 00:07:26.255 "reset": true, 00:07:26.255 "nvme_admin": false, 00:07:26.255 "nvme_io": false, 00:07:26.255 "nvme_io_md": false, 00:07:26.255 "write_zeroes": true, 00:07:26.255 "zcopy": false, 00:07:26.255 "get_zone_info": false, 00:07:26.255 "zone_management": false, 00:07:26.255 "zone_append": false, 00:07:26.255 "compare": false, 00:07:26.255 "compare_and_write": false, 00:07:26.255 "abort": false, 00:07:26.255 "seek_hole": false, 00:07:26.255 "seek_data": false, 00:07:26.255 "copy": false, 00:07:26.255 "nvme_iov_md": false 00:07:26.255 }, 00:07:26.255 "memory_domains": [ 00:07:26.255 { 00:07:26.255 "dma_device_id": "system", 00:07:26.255 
"dma_device_type": 1 00:07:26.255 }, 00:07:26.255 { 00:07:26.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.255 "dma_device_type": 2 00:07:26.255 }, 00:07:26.255 { 00:07:26.255 "dma_device_id": "system", 00:07:26.255 "dma_device_type": 1 00:07:26.255 }, 00:07:26.255 { 00:07:26.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.255 "dma_device_type": 2 00:07:26.255 } 00:07:26.255 ], 00:07:26.255 "driver_specific": { 00:07:26.255 "raid": { 00:07:26.255 "uuid": "30e30454-0b50-4940-8ad0-f2ddf8189945", 00:07:26.255 "strip_size_kb": 0, 00:07:26.255 "state": "online", 00:07:26.255 "raid_level": "raid1", 00:07:26.256 "superblock": true, 00:07:26.256 "num_base_bdevs": 2, 00:07:26.256 "num_base_bdevs_discovered": 2, 00:07:26.256 "num_base_bdevs_operational": 2, 00:07:26.256 "base_bdevs_list": [ 00:07:26.256 { 00:07:26.256 "name": "BaseBdev1", 00:07:26.256 "uuid": "5f11592b-5a3c-40d2-999e-f744de59c54e", 00:07:26.256 "is_configured": true, 00:07:26.256 "data_offset": 2048, 00:07:26.256 "data_size": 63488 00:07:26.256 }, 00:07:26.256 { 00:07:26.256 "name": "BaseBdev2", 00:07:26.256 "uuid": "75291fe4-01ec-4a87-85ad-2f6676fa2305", 00:07:26.256 "is_configured": true, 00:07:26.256 "data_offset": 2048, 00:07:26.256 "data_size": 63488 00:07:26.256 } 00:07:26.256 ] 00:07:26.256 } 00:07:26.256 } 00:07:26.256 }' 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:26.256 BaseBdev2' 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.256 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.515 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.515 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.515 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:26.515 02:22:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.515 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.515 [2024-11-28 02:22:59.948828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.515 "name": "Existed_Raid", 00:07:26.515 "uuid": "30e30454-0b50-4940-8ad0-f2ddf8189945", 00:07:26.515 "strip_size_kb": 0, 00:07:26.515 "state": "online", 00:07:26.515 "raid_level": "raid1", 00:07:26.515 "superblock": true, 00:07:26.515 "num_base_bdevs": 2, 00:07:26.515 "num_base_bdevs_discovered": 1, 00:07:26.515 "num_base_bdevs_operational": 1, 00:07:26.515 "base_bdevs_list": [ 00:07:26.515 { 00:07:26.515 "name": null, 00:07:26.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.515 "is_configured": false, 00:07:26.515 "data_offset": 0, 00:07:26.515 "data_size": 63488 00:07:26.515 }, 00:07:26.515 { 00:07:26.515 "name": "BaseBdev2", 00:07:26.515 "uuid": "75291fe4-01ec-4a87-85ad-2f6676fa2305", 00:07:26.515 "is_configured": true, 00:07:26.515 "data_offset": 2048, 00:07:26.515 "data_size": 63488 00:07:26.515 } 00:07:26.515 ] 00:07:26.515 }' 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.515 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.774 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:26.774 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:26.774 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.774 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:26.774 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.774 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.774 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.034 [2024-11-28 02:23:00.464076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:27.034 [2024-11-28 02:23:00.464183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.034 [2024-11-28 02:23:00.557797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.034 [2024-11-28 02:23:00.557853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.034 [2024-11-28 02:23:00.557865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62786 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62786 ']' 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62786 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62786 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.034 02:23:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.034 killing process with pid 62786 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62786' 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62786 00:07:27.034 [2024-11-28 02:23:00.641261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.034 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62786 00:07:27.034 [2024-11-28 02:23:00.658056] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.412 02:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:28.412 00:07:28.412 real 0m4.814s 00:07:28.412 user 0m6.923s 00:07:28.412 sys 0m0.778s 00:07:28.412 02:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.412 02:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.412 ************************************ 00:07:28.412 END TEST raid_state_function_test_sb 00:07:28.412 ************************************ 00:07:28.412 02:23:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:28.412 02:23:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:28.412 02:23:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.412 02:23:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.412 ************************************ 00:07:28.412 START TEST raid_superblock_test 00:07:28.412 ************************************ 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63027 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63027 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63027 ']' 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.412 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.412 [2024-11-28 02:23:01.910452] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:28.413 [2024-11-28 02:23:01.910655] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63027 ] 00:07:28.413 [2024-11-28 02:23:02.081381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.673 [2024-11-28 02:23:02.196440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.932 [2024-11-28 02:23:02.393866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.932 [2024-11-28 02:23:02.393937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.193 02:23:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.193 malloc1 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.193 [2024-11-28 02:23:02.793259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:29.193 [2024-11-28 02:23:02.793322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.193 [2024-11-28 02:23:02.793342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:29.193 [2024-11-28 02:23:02.793351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.193 
[2024-11-28 02:23:02.795414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.193 [2024-11-28 02:23:02.795451] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:29.193 pt1 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.193 malloc2 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:29.193 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.193 02:23:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.193 [2024-11-28 02:23:02.844473] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:29.193 [2024-11-28 02:23:02.844527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.193 [2024-11-28 02:23:02.844552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:29.193 [2024-11-28 02:23:02.844561] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.193 [2024-11-28 02:23:02.846520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.193 [2024-11-28 02:23:02.846554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:29.193 pt2 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.194 [2024-11-28 02:23:02.852499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:29.194 [2024-11-28 02:23:02.854261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:29.194 [2024-11-28 02:23:02.854420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:29.194 [2024-11-28 02:23:02.854436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:29.194 [2024-11-28 
02:23:02.854666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:29.194 [2024-11-28 02:23:02.854807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:29.194 [2024-11-28 02:23:02.854822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:29.194 [2024-11-28 02:23:02.854957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.194 02:23:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.194 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.454 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.454 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.454 "name": "raid_bdev1", 00:07:29.454 "uuid": "20e2e4a9-457c-4b10-8fbf-7169e52f1027", 00:07:29.454 "strip_size_kb": 0, 00:07:29.454 "state": "online", 00:07:29.454 "raid_level": "raid1", 00:07:29.454 "superblock": true, 00:07:29.454 "num_base_bdevs": 2, 00:07:29.454 "num_base_bdevs_discovered": 2, 00:07:29.454 "num_base_bdevs_operational": 2, 00:07:29.454 "base_bdevs_list": [ 00:07:29.454 { 00:07:29.454 "name": "pt1", 00:07:29.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:29.454 "is_configured": true, 00:07:29.454 "data_offset": 2048, 00:07:29.454 "data_size": 63488 00:07:29.454 }, 00:07:29.454 { 00:07:29.454 "name": "pt2", 00:07:29.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.454 "is_configured": true, 00:07:29.454 "data_offset": 2048, 00:07:29.454 "data_size": 63488 00:07:29.454 } 00:07:29.454 ] 00:07:29.454 }' 00:07:29.454 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.454 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:29.714 
02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.714 [2024-11-28 02:23:03.311993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:29.714 "name": "raid_bdev1", 00:07:29.714 "aliases": [ 00:07:29.714 "20e2e4a9-457c-4b10-8fbf-7169e52f1027" 00:07:29.714 ], 00:07:29.714 "product_name": "Raid Volume", 00:07:29.714 "block_size": 512, 00:07:29.714 "num_blocks": 63488, 00:07:29.714 "uuid": "20e2e4a9-457c-4b10-8fbf-7169e52f1027", 00:07:29.714 "assigned_rate_limits": { 00:07:29.714 "rw_ios_per_sec": 0, 00:07:29.714 "rw_mbytes_per_sec": 0, 00:07:29.714 "r_mbytes_per_sec": 0, 00:07:29.714 "w_mbytes_per_sec": 0 00:07:29.714 }, 00:07:29.714 "claimed": false, 00:07:29.714 "zoned": false, 00:07:29.714 "supported_io_types": { 00:07:29.714 "read": true, 00:07:29.714 "write": true, 00:07:29.714 "unmap": false, 00:07:29.714 "flush": false, 00:07:29.714 "reset": true, 00:07:29.714 "nvme_admin": false, 00:07:29.714 "nvme_io": false, 00:07:29.714 "nvme_io_md": false, 00:07:29.714 "write_zeroes": true, 00:07:29.714 "zcopy": false, 00:07:29.714 "get_zone_info": false, 00:07:29.714 "zone_management": false, 00:07:29.714 "zone_append": false, 00:07:29.714 "compare": false, 00:07:29.714 "compare_and_write": false, 00:07:29.714 "abort": false, 00:07:29.714 "seek_hole": false, 
00:07:29.714 "seek_data": false, 00:07:29.714 "copy": false, 00:07:29.714 "nvme_iov_md": false 00:07:29.714 }, 00:07:29.714 "memory_domains": [ 00:07:29.714 { 00:07:29.714 "dma_device_id": "system", 00:07:29.714 "dma_device_type": 1 00:07:29.714 }, 00:07:29.714 { 00:07:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.714 "dma_device_type": 2 00:07:29.714 }, 00:07:29.714 { 00:07:29.714 "dma_device_id": "system", 00:07:29.714 "dma_device_type": 1 00:07:29.714 }, 00:07:29.714 { 00:07:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.714 "dma_device_type": 2 00:07:29.714 } 00:07:29.714 ], 00:07:29.714 "driver_specific": { 00:07:29.714 "raid": { 00:07:29.714 "uuid": "20e2e4a9-457c-4b10-8fbf-7169e52f1027", 00:07:29.714 "strip_size_kb": 0, 00:07:29.714 "state": "online", 00:07:29.714 "raid_level": "raid1", 00:07:29.714 "superblock": true, 00:07:29.714 "num_base_bdevs": 2, 00:07:29.714 "num_base_bdevs_discovered": 2, 00:07:29.714 "num_base_bdevs_operational": 2, 00:07:29.714 "base_bdevs_list": [ 00:07:29.714 { 00:07:29.714 "name": "pt1", 00:07:29.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:29.714 "is_configured": true, 00:07:29.714 "data_offset": 2048, 00:07:29.714 "data_size": 63488 00:07:29.714 }, 00:07:29.714 { 00:07:29.714 "name": "pt2", 00:07:29.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.714 "is_configured": true, 00:07:29.714 "data_offset": 2048, 00:07:29.714 "data_size": 63488 00:07:29.714 } 00:07:29.714 ] 00:07:29.714 } 00:07:29.714 } 00:07:29.714 }' 00:07:29.714 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:29.715 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:29.715 pt2' 00:07:29.715 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.975 02:23:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:29.975 [2024-11-28 02:23:03.495685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=20e2e4a9-457c-4b10-8fbf-7169e52f1027 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 20e2e4a9-457c-4b10-8fbf-7169e52f1027 ']' 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.975 [2024-11-28 02:23:03.543286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.975 [2024-11-28 02:23:03.543319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.975 [2024-11-28 02:23:03.543405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.975 [2024-11-28 02:23:03.543465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.975 [2024-11-28 02:23:03.543477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:29.975 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.235 [2024-11-28 02:23:03.663127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:30.235 [2024-11-28 02:23:03.664945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:30.235 [2024-11-28 02:23:03.665058] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:07:30.235 [2024-11-28 02:23:03.665115] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:30.235 [2024-11-28 02:23:03.665130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.235 [2024-11-28 02:23:03.665140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:30.235 request: 00:07:30.235 { 00:07:30.235 "name": "raid_bdev1", 00:07:30.235 "raid_level": "raid1", 00:07:30.235 "base_bdevs": [ 00:07:30.235 "malloc1", 00:07:30.235 "malloc2" 00:07:30.235 ], 00:07:30.235 "superblock": false, 00:07:30.235 "method": "bdev_raid_create", 00:07:30.235 "req_id": 1 00:07:30.235 } 00:07:30.235 Got JSON-RPC error response 00:07:30.235 response: 00:07:30.235 { 00:07:30.235 "code": -17, 00:07:30.235 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:30.235 } 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.235 [2024-11-28 02:23:03.726977] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:30.235 [2024-11-28 02:23:03.727027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.235 [2024-11-28 02:23:03.727045] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:30.235 [2024-11-28 02:23:03.727056] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.235 [2024-11-28 02:23:03.729212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.235 [2024-11-28 02:23:03.729249] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:30.235 [2024-11-28 02:23:03.729322] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:30.235 [2024-11-28 02:23:03.729377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:30.235 pt1 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.235 02:23:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.235 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.236 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.236 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.236 "name": "raid_bdev1", 00:07:30.236 "uuid": "20e2e4a9-457c-4b10-8fbf-7169e52f1027", 00:07:30.236 "strip_size_kb": 0, 00:07:30.236 "state": "configuring", 00:07:30.236 "raid_level": "raid1", 00:07:30.236 "superblock": true, 00:07:30.236 "num_base_bdevs": 2, 00:07:30.236 "num_base_bdevs_discovered": 1, 00:07:30.236 "num_base_bdevs_operational": 2, 00:07:30.236 "base_bdevs_list": [ 00:07:30.236 { 00:07:30.236 "name": "pt1", 00:07:30.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.236 
"is_configured": true, 00:07:30.236 "data_offset": 2048, 00:07:30.236 "data_size": 63488 00:07:30.236 }, 00:07:30.236 { 00:07:30.236 "name": null, 00:07:30.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.236 "is_configured": false, 00:07:30.236 "data_offset": 2048, 00:07:30.236 "data_size": 63488 00:07:30.236 } 00:07:30.236 ] 00:07:30.236 }' 00:07:30.236 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.236 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.496 [2024-11-28 02:23:04.158289] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:30.496 [2024-11-28 02:23:04.158421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.496 [2024-11-28 02:23:04.158464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:30.496 [2024-11-28 02:23:04.158498] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.496 [2024-11-28 02:23:04.159077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.496 [2024-11-28 02:23:04.159148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:30.496 [2024-11-28 02:23:04.159268] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:30.496 [2024-11-28 02:23:04.159334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:30.496 [2024-11-28 02:23:04.159472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:30.496 [2024-11-28 02:23:04.159512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:30.496 [2024-11-28 02:23:04.159776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:30.496 [2024-11-28 02:23:04.159971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:30.496 [2024-11-28 02:23:04.160013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:30.496 [2024-11-28 02:23:04.160207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.496 pt2 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.496 
02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.496 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.757 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.757 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.757 "name": "raid_bdev1", 00:07:30.757 "uuid": "20e2e4a9-457c-4b10-8fbf-7169e52f1027", 00:07:30.757 "strip_size_kb": 0, 00:07:30.757 "state": "online", 00:07:30.757 "raid_level": "raid1", 00:07:30.757 "superblock": true, 00:07:30.757 "num_base_bdevs": 2, 00:07:30.757 "num_base_bdevs_discovered": 2, 00:07:30.757 "num_base_bdevs_operational": 2, 00:07:30.757 "base_bdevs_list": [ 00:07:30.757 { 00:07:30.757 "name": "pt1", 00:07:30.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.757 "is_configured": true, 00:07:30.757 "data_offset": 2048, 00:07:30.757 "data_size": 63488 00:07:30.757 }, 00:07:30.757 { 00:07:30.757 "name": "pt2", 00:07:30.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.757 "is_configured": true, 00:07:30.757 "data_offset": 2048, 00:07:30.757 "data_size": 63488 00:07:30.757 } 00:07:30.757 ] 00:07:30.757 }' 00:07:30.757 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:30.757 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.017 [2024-11-28 02:23:04.605790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.017 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.017 "name": "raid_bdev1", 00:07:31.017 "aliases": [ 00:07:31.017 "20e2e4a9-457c-4b10-8fbf-7169e52f1027" 00:07:31.017 ], 00:07:31.017 "product_name": "Raid Volume", 00:07:31.017 "block_size": 512, 00:07:31.017 "num_blocks": 63488, 00:07:31.017 "uuid": "20e2e4a9-457c-4b10-8fbf-7169e52f1027", 00:07:31.017 "assigned_rate_limits": { 00:07:31.017 "rw_ios_per_sec": 0, 00:07:31.017 "rw_mbytes_per_sec": 0, 00:07:31.017 "r_mbytes_per_sec": 0, 00:07:31.017 "w_mbytes_per_sec": 0 
00:07:31.017 }, 00:07:31.017 "claimed": false, 00:07:31.017 "zoned": false, 00:07:31.017 "supported_io_types": { 00:07:31.017 "read": true, 00:07:31.017 "write": true, 00:07:31.017 "unmap": false, 00:07:31.017 "flush": false, 00:07:31.017 "reset": true, 00:07:31.017 "nvme_admin": false, 00:07:31.017 "nvme_io": false, 00:07:31.017 "nvme_io_md": false, 00:07:31.017 "write_zeroes": true, 00:07:31.017 "zcopy": false, 00:07:31.017 "get_zone_info": false, 00:07:31.017 "zone_management": false, 00:07:31.017 "zone_append": false, 00:07:31.017 "compare": false, 00:07:31.017 "compare_and_write": false, 00:07:31.017 "abort": false, 00:07:31.017 "seek_hole": false, 00:07:31.017 "seek_data": false, 00:07:31.017 "copy": false, 00:07:31.017 "nvme_iov_md": false 00:07:31.017 }, 00:07:31.017 "memory_domains": [ 00:07:31.017 { 00:07:31.017 "dma_device_id": "system", 00:07:31.017 "dma_device_type": 1 00:07:31.017 }, 00:07:31.017 { 00:07:31.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.017 "dma_device_type": 2 00:07:31.017 }, 00:07:31.017 { 00:07:31.017 "dma_device_id": "system", 00:07:31.017 "dma_device_type": 1 00:07:31.017 }, 00:07:31.017 { 00:07:31.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.017 "dma_device_type": 2 00:07:31.017 } 00:07:31.017 ], 00:07:31.017 "driver_specific": { 00:07:31.017 "raid": { 00:07:31.017 "uuid": "20e2e4a9-457c-4b10-8fbf-7169e52f1027", 00:07:31.017 "strip_size_kb": 0, 00:07:31.017 "state": "online", 00:07:31.017 "raid_level": "raid1", 00:07:31.017 "superblock": true, 00:07:31.017 "num_base_bdevs": 2, 00:07:31.017 "num_base_bdevs_discovered": 2, 00:07:31.017 "num_base_bdevs_operational": 2, 00:07:31.017 "base_bdevs_list": [ 00:07:31.017 { 00:07:31.017 "name": "pt1", 00:07:31.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.017 "is_configured": true, 00:07:31.017 "data_offset": 2048, 00:07:31.017 "data_size": 63488 00:07:31.017 }, 00:07:31.017 { 00:07:31.017 "name": "pt2", 00:07:31.017 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:31.017 "is_configured": true, 00:07:31.017 "data_offset": 2048, 00:07:31.017 "data_size": 63488 00:07:31.017 } 00:07:31.017 ] 00:07:31.017 } 00:07:31.017 } 00:07:31.017 }' 00:07:31.018 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.018 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:31.018 pt2' 00:07:31.018 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:31.278 [2024-11-28 02:23:04.829356] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 20e2e4a9-457c-4b10-8fbf-7169e52f1027 '!=' 20e2e4a9-457c-4b10-8fbf-7169e52f1027 ']' 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:31.278 [2024-11-28 02:23:04.877076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.278 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.278 "name": "raid_bdev1", 
00:07:31.278 "uuid": "20e2e4a9-457c-4b10-8fbf-7169e52f1027", 00:07:31.278 "strip_size_kb": 0, 00:07:31.278 "state": "online", 00:07:31.278 "raid_level": "raid1", 00:07:31.278 "superblock": true, 00:07:31.278 "num_base_bdevs": 2, 00:07:31.278 "num_base_bdevs_discovered": 1, 00:07:31.278 "num_base_bdevs_operational": 1, 00:07:31.278 "base_bdevs_list": [ 00:07:31.278 { 00:07:31.278 "name": null, 00:07:31.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.279 "is_configured": false, 00:07:31.279 "data_offset": 0, 00:07:31.279 "data_size": 63488 00:07:31.279 }, 00:07:31.279 { 00:07:31.279 "name": "pt2", 00:07:31.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.279 "is_configured": true, 00:07:31.279 "data_offset": 2048, 00:07:31.279 "data_size": 63488 00:07:31.279 } 00:07:31.279 ] 00:07:31.279 }' 00:07:31.279 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.279 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.854 [2024-11-28 02:23:05.356210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.854 [2024-11-28 02:23:05.356286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.854 [2024-11-28 02:23:05.356390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.854 [2024-11-28 02:23:05.356464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.854 [2024-11-28 02:23:05.356512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:31.854 02:23:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.854 [2024-11-28 02:23:05.420070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.854 [2024-11-28 02:23:05.420126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.854 [2024-11-28 02:23:05.420142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:31.854 [2024-11-28 02:23:05.420152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.854 [2024-11-28 02:23:05.422280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.854 [2024-11-28 02:23:05.422316] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:31.854 [2024-11-28 02:23:05.422392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:31.854 [2024-11-28 02:23:05.422444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.854 [2024-11-28 02:23:05.422558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:31.854 [2024-11-28 02:23:05.422574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:31.854 [2024-11-28 02:23:05.422794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:31.854 [2024-11-28 02:23:05.422954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:31.854 [2024-11-28 02:23:05.422964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:31.854 
[2024-11-28 02:23:05.423093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.854 pt2 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.854 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.855 "name": 
"raid_bdev1", 00:07:31.855 "uuid": "20e2e4a9-457c-4b10-8fbf-7169e52f1027", 00:07:31.855 "strip_size_kb": 0, 00:07:31.855 "state": "online", 00:07:31.855 "raid_level": "raid1", 00:07:31.855 "superblock": true, 00:07:31.855 "num_base_bdevs": 2, 00:07:31.855 "num_base_bdevs_discovered": 1, 00:07:31.855 "num_base_bdevs_operational": 1, 00:07:31.855 "base_bdevs_list": [ 00:07:31.855 { 00:07:31.855 "name": null, 00:07:31.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.855 "is_configured": false, 00:07:31.855 "data_offset": 2048, 00:07:31.855 "data_size": 63488 00:07:31.855 }, 00:07:31.855 { 00:07:31.855 "name": "pt2", 00:07:31.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.855 "is_configured": true, 00:07:31.855 "data_offset": 2048, 00:07:31.855 "data_size": 63488 00:07:31.855 } 00:07:31.855 ] 00:07:31.855 }' 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.855 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.425 [2024-11-28 02:23:05.839424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.425 [2024-11-28 02:23:05.839455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.425 [2024-11-28 02:23:05.839527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.425 [2024-11-28 02:23:05.839576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.425 [2024-11-28 02:23:05.839584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.425 [2024-11-28 02:23:05.891345] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:32.425 [2024-11-28 02:23:05.891443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.425 [2024-11-28 02:23:05.891466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:32.425 [2024-11-28 02:23:05.891476] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.425 [2024-11-28 02:23:05.893675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.425 [2024-11-28 02:23:05.893710] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:32.425 [2024-11-28 02:23:05.893787] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:32.425 [2024-11-28 02:23:05.893827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:32.425 [2024-11-28 02:23:05.893983] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:32.425 [2024-11-28 02:23:05.893999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.425 [2024-11-28 02:23:05.894014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:32.425 [2024-11-28 02:23:05.894069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:32.425 [2024-11-28 02:23:05.894136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:32.425 [2024-11-28 02:23:05.894145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:32.425 [2024-11-28 02:23:05.894378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:32.425 [2024-11-28 02:23:05.894529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:32.425 [2024-11-28 02:23:05.894541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:32.425 [2024-11-28 02:23:05.894680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.425 pt1 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.425 "name": "raid_bdev1", 00:07:32.425 "uuid": "20e2e4a9-457c-4b10-8fbf-7169e52f1027", 00:07:32.425 "strip_size_kb": 0, 00:07:32.425 "state": "online", 00:07:32.425 "raid_level": "raid1", 00:07:32.425 "superblock": true, 00:07:32.425 "num_base_bdevs": 2, 00:07:32.425 "num_base_bdevs_discovered": 1, 00:07:32.425 "num_base_bdevs_operational": 1, 00:07:32.425 
"base_bdevs_list": [ 00:07:32.425 { 00:07:32.425 "name": null, 00:07:32.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.425 "is_configured": false, 00:07:32.425 "data_offset": 2048, 00:07:32.425 "data_size": 63488 00:07:32.425 }, 00:07:32.425 { 00:07:32.425 "name": "pt2", 00:07:32.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.425 "is_configured": true, 00:07:32.425 "data_offset": 2048, 00:07:32.425 "data_size": 63488 00:07:32.425 } 00:07:32.425 ] 00:07:32.425 }' 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.425 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:32.688 [2024-11-28 02:23:06.350784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.688 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 20e2e4a9-457c-4b10-8fbf-7169e52f1027 '!=' 20e2e4a9-457c-4b10-8fbf-7169e52f1027 ']' 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63027 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63027 ']' 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63027 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63027 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.948 killing process with pid 63027 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63027' 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63027 00:07:32.948 [2024-11-28 02:23:06.418440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.948 [2024-11-28 02:23:06.418582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.948 [2024-11-28 02:23:06.418635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.948 [2024-11-28 02:23:06.418650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:32.948 02:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63027 00:07:33.208 [2024-11-28 02:23:06.627898] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.146 ************************************ 00:07:34.146 END TEST raid_superblock_test 00:07:34.146 ************************************ 00:07:34.146 02:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:34.146 00:07:34.146 real 0m5.891s 00:07:34.146 user 0m8.924s 00:07:34.146 sys 0m0.978s 00:07:34.146 02:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.146 02:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.146 02:23:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:34.146 02:23:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.146 02:23:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.146 02:23:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.146 ************************************ 00:07:34.146 START TEST raid_read_error_test 00:07:34.146 ************************************ 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LlvgniUgnR 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63357 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63357 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 63357 ']' 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.146 02:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.405 [2024-11-28 02:23:07.886336] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:34.405 [2024-11-28 02:23:07.886532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63357 ] 00:07:34.405 [2024-11-28 02:23:08.060727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.665 [2024-11-28 02:23:08.179816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.924 [2024-11-28 02:23:08.380028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.924 [2024-11-28 02:23:08.380072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.184 02:23:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.184 BaseBdev1_malloc 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.184 true 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.184 [2024-11-28 02:23:08.763987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.184 [2024-11-28 02:23:08.764080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.184 [2024-11-28 02:23:08.764132] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.184 [2024-11-28 02:23:08.764162] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.184 [2024-11-28 02:23:08.766172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.184 [2024-11-28 02:23:08.766243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:07:35.184 BaseBdev1
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.184 BaseBdev2_malloc
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.184 true
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.184 [2024-11-28 02:23:08.829470] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:35.184 [2024-11-28 02:23:08.829522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:35.184 [2024-11-28 02:23:08.829536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:35.184 [2024-11-28 02:23:08.829546] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:35.184 [2024-11-28 02:23:08.831544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:35.184 [2024-11-28 02:23:08.831585] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:35.184 BaseBdev2
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.184 [2024-11-28 02:23:08.841504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:35.184 [2024-11-28 02:23:08.843281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:35.184 [2024-11-28 02:23:08.843472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:35.184 [2024-11-28 02:23:08.843487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:07:35.184 [2024-11-28 02:23:08.843699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:07:35.184 [2024-11-28 02:23:08.843861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:35.184 [2024-11-28 02:23:08.843871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:35.184 [2024-11-28 02:23:08.844042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state
raid_bdev1 online raid1 0 2
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:35.184 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:35.185 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:35.185 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:35.185 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:35.185 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:35.185 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.185 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.444 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.444 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:35.444 "name": "raid_bdev1",
00:07:35.444 "uuid": "5ea77368-5323-4538-81ee-500582da88be",
00:07:35.444 "strip_size_kb": 0,
00:07:35.444 "state": "online",
00:07:35.444 "raid_level": "raid1",
00:07:35.444 "superblock": true,
00:07:35.444 "num_base_bdevs": 2,
00:07:35.444 "num_base_bdevs_discovered": 2,
00:07:35.444 "num_base_bdevs_operational": 2,
00:07:35.444 "base_bdevs_list": [
00:07:35.444 {
00:07:35.444 "name": "BaseBdev1",
00:07:35.444 "uuid": "3d39fb6d-fbc8-5b12-afe8-cd827ba96b07",
00:07:35.444 "is_configured": true,
00:07:35.444 "data_offset": 2048,
00:07:35.444 "data_size": 63488
00:07:35.444 },
00:07:35.444 {
00:07:35.444 "name": "BaseBdev2",
00:07:35.444 "uuid": "103f42c5-e5f4-5596-a761-cd1f3fb63ca8",
00:07:35.444 "is_configured": true,
00:07:35.444 "data_offset": 2048,
00:07:35.444 "data_size": 63488
00:07:35.444 }
00:07:35.444 ]
00:07:35.444 }'
00:07:35.444 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:35.444 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.702 02:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:35.702 02:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:35.702 [2024-11-28 02:23:09.325992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:36.639
02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.639 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:36.639 "name": "raid_bdev1",
00:07:36.639 "uuid": "5ea77368-5323-4538-81ee-500582da88be",
00:07:36.639 "strip_size_kb": 0,
00:07:36.639 "state": "online",
00:07:36.639 "raid_level": "raid1",
00:07:36.639 "superblock": true,
00:07:36.639 "num_base_bdevs": 2,
00:07:36.639 "num_base_bdevs_discovered": 2,
00:07:36.639 "num_base_bdevs_operational": 2,
00:07:36.639 "base_bdevs_list": [
00:07:36.639 {
00:07:36.639 "name": "BaseBdev1",
00:07:36.639 "uuid": "3d39fb6d-fbc8-5b12-afe8-cd827ba96b07",
00:07:36.639 "is_configured": true,
00:07:36.639 "data_offset": 2048,
00:07:36.639 "data_size": 63488
00:07:36.639 },
00:07:36.639 {
00:07:36.639 "name": "BaseBdev2",
00:07:36.639 "uuid": "103f42c5-e5f4-5596-a761-cd1f3fb63ca8",
00:07:36.639 "is_configured": true,
00:07:36.639 "data_offset": 2048,
00:07:36.639 "data_size": 63488
00:07:36.640 }
00:07:36.640 ]
00:07:36.640 }'
00:07:36.640 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:36.640 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.208 [2024-11-28 02:23:10.699686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:37.208 [2024-11-28 02:23:10.699796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:37.208 [2024-11-28 02:23:10.702507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:37.208 [2024-11-28 02:23:10.702590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:37.208 [2024-11-28 02:23:10.702674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:37.208 [2024-11-28 02:23:10.702686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:37.208 {
00:07:37.208 "results": [
00:07:37.208 {
00:07:37.208 "job":
"raid_bdev1", 00:07:37.208 "core_mask": "0x1", 00:07:37.208 "workload": "randrw", 00:07:37.208 "percentage": 50, 00:07:37.208 "status": "finished", 00:07:37.208 "queue_depth": 1, 00:07:37.208 "io_size": 131072, 00:07:37.208 "runtime": 1.374728, 00:07:37.208 "iops": 18362.17782717745, 00:07:37.208 "mibps": 2295.272228397181, 00:07:37.208 "io_failed": 0, 00:07:37.208 "io_timeout": 0, 00:07:37.208 "avg_latency_us": 51.867339002018284, 00:07:37.208 "min_latency_us": 23.252401746724892, 00:07:37.208 "max_latency_us": 1395.1441048034935 00:07:37.208 } 00:07:37.208 ], 00:07:37.208 "core_count": 1 00:07:37.208 } 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63357 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63357 ']' 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63357 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63357 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.208 killing process with pid 63357 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63357' 00:07:37.208 02:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63357 00:07:37.208 [2024-11-28 02:23:10.754151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.208 02:23:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63357 00:07:37.467 [2024-11-28 02:23:10.891173] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.406 02:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LlvgniUgnR 00:07:38.406 02:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:38.406 02:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:38.406 02:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:38.406 02:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:38.406 ************************************ 00:07:38.406 END TEST raid_read_error_test 00:07:38.406 ************************************ 00:07:38.406 02:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.406 02:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:38.406 02:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:38.406 00:07:38.406 real 0m4.291s 00:07:38.406 user 0m5.116s 00:07:38.406 sys 0m0.528s 00:07:38.406 02:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.406 02:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.666 02:23:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:38.666 02:23:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.666 02:23:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.666 02:23:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.666 ************************************ 00:07:38.666 START TEST raid_write_error_test 00:07:38.666 ************************************ 00:07:38.666 02:23:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HVQD772n1P
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63497
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63497
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63497 ']'
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:38.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:38.666 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.666 [2024-11-28 02:23:12.248343] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:07:38.666 [2024-11-28 02:23:12.248822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63497 ]
00:07:38.926 [2024-11-28 02:23:12.423497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:39.186 [2024-11-28 02:23:12.531926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.445 [2024-11-28 02:23:12.734554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:39.445 [2024-11-28 02:23:12.734617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:39.445 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:39.445 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:07:39.445 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:39.445 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:39.445 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.445 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.705 BaseBdev1_malloc
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.705 true
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.705 [2024-11-28 02:23:13.146260] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:39.705 [2024-11-28 02:23:13.146534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:39.705 [2024-11-28 02:23:13.146638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:07:39.705 [2024-11-28 02:23:13.146716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:39.705 [2024-11-28 02:23:13.149026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:39.705 [2024-11-28 02:23:13.149186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:39.705 BaseBdev1
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.705 BaseBdev2_malloc
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:39.705 02:23:13
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.705 true
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.705 [2024-11-28 02:23:13.213152] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:39.705 [2024-11-28 02:23:13.213456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:39.705 [2024-11-28 02:23:13.213549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:39.705 [2024-11-28 02:23:13.213633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:39.705 [2024-11-28 02:23:13.215666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:39.705 [2024-11-28 02:23:13.215798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:39.705 BaseBdev2
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.705 [2024-11-28 02:23:13.225181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:39.705 [2024-11-28 02:23:13.227057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:39.705 [2024-11-28 02:23:13.227286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:39.705 [2024-11-28 02:23:13.227341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:07:39.705 [2024-11-28 02:23:13.227643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:07:39.705 [2024-11-28 02:23:13.227866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:39.705 [2024-11-28 02:23:13.227913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:39.705 [2024-11-28 02:23:13.228114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test --
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:39.705 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:39.706 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:39.706 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:39.706 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.706 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.706 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.706 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:39.706 "name": "raid_bdev1",
00:07:39.706 "uuid": "6bb9ce41-b78d-44dc-8d22-1645394f4976",
00:07:39.706 "strip_size_kb": 0,
00:07:39.706 "state": "online",
00:07:39.706 "raid_level": "raid1",
00:07:39.706 "superblock": true,
00:07:39.706 "num_base_bdevs": 2,
00:07:39.706 "num_base_bdevs_discovered": 2,
00:07:39.706 "num_base_bdevs_operational": 2,
00:07:39.706 "base_bdevs_list": [
00:07:39.706 {
00:07:39.706 "name": "BaseBdev1",
00:07:39.706 "uuid": "43e901a1-0c13-5c07-9c35-df984868955d",
00:07:39.706 "is_configured": true,
00:07:39.706 "data_offset": 2048,
00:07:39.706 "data_size": 63488
00:07:39.706 },
00:07:39.706 {
00:07:39.706 "name": "BaseBdev2",
00:07:39.706 "uuid": "e87a7d2d-d0fc-5c83-95ab-ece188c68b24",
00:07:39.706 "is_configured": true,
00:07:39.706 "data_offset": 2048,
00:07:39.706 "data_size": 63488
00:07:39.706 }
00:07:39.706 ]
00:07:39.706 }'
00:07:39.706 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:39.706 02:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.056 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:40.056 02:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:40.331 [2024-11-28 02:23:13.781677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:41.271 [2024-11-28 02:23:14.701507] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:07:41.271 [2024-11-28 02:23:14.702034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:41.271 [2024-11-28 02:23:14.702322] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test --
bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:41.271 "name": "raid_bdev1",
00:07:41.271 "uuid": "6bb9ce41-b78d-44dc-8d22-1645394f4976",
00:07:41.271 "strip_size_kb": 0,
00:07:41.271 "state": "online",
00:07:41.271 "raid_level": "raid1",
00:07:41.271 "superblock": true,
00:07:41.271 "num_base_bdevs": 2,
00:07:41.271 "num_base_bdevs_discovered": 1,
00:07:41.271 "num_base_bdevs_operational": 1,
00:07:41.271 "base_bdevs_list": [
00:07:41.271 {
00:07:41.271 "name": null,
00:07:41.271 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:41.271 "is_configured": false,
00:07:41.271 "data_offset": 0,
00:07:41.271 "data_size": 63488
00:07:41.271 },
00:07:41.271 {
00:07:41.271 "name": "BaseBdev2",
00:07:41.271 "uuid": "e87a7d2d-d0fc-5c83-95ab-ece188c68b24",
00:07:41.271 "is_configured": true,
00:07:41.271 "data_offset": 2048,
00:07:41.271 "data_size": 63488
00:07:41.271 }
00:07:41.271 ]
00:07:41.271 }'
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:41.271 02:23:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:41.532 02:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:41.532 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.532 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:41.532 [2024-11-28 02:23:15.170460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:41.532 [2024-11-28 02:23:15.170538] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:41.532 [2024-11-28 02:23:15.173194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:41.532 [2024-11-28 02:23:15.173275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:41.532 [2024-11-28 02:23:15.173351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:41.532 [2024-11-28 02:23:15.173413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:41.532 {
00:07:41.532 "results": [
00:07:41.532 {
00:07:41.532 "job": "raid_bdev1",
00:07:41.532 "core_mask": "0x1",
00:07:41.532 "workload": "randrw",
00:07:41.532 "percentage": 50,
00:07:41.532 "status": "finished",
00:07:41.532 "queue_depth": 1,
00:07:41.532 "io_size": 131072,
00:07:41.532 "runtime": 1.389774,
00:07:41.532 "iops": 21386.210995456815,
00:07:41.532 "mibps": 2673.276374432102,
00:07:41.532 "io_failed": 0,
00:07:41.532 "io_timeout": 0,
00:07:41.532 "avg_latency_us": 44.11431439343742, 00:07:41.532 "min_latency_us": 22.91703056768559, 00:07:41.532 "max_latency_us": 1395.1441048034935 00:07:41.532 } 00:07:41.532 ], 00:07:41.532 "core_count": 1 00:07:41.532 } 00:07:41.532 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.532 02:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63497 00:07:41.532 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63497 ']' 00:07:41.532 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63497 00:07:41.532 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:41.532 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.532 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63497 00:07:41.791 killing process with pid 63497 00:07:41.791 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.791 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.791 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63497' 00:07:41.791 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63497 00:07:41.792 [2024-11-28 02:23:15.217165] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.792 02:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63497 00:07:41.792 [2024-11-28 02:23:15.353195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.173 02:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HVQD772n1P 00:07:43.173 02:23:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:43.173 02:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:43.173 ************************************ 00:07:43.173 END TEST raid_write_error_test 00:07:43.173 ************************************ 00:07:43.173 02:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:43.173 02:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:43.173 02:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.173 02:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:43.173 02:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:43.173 00:07:43.173 real 0m4.401s 00:07:43.173 user 0m5.313s 00:07:43.173 sys 0m0.536s 00:07:43.173 02:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.173 02:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.173 02:23:16 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:43.173 02:23:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:43.173 02:23:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:43.173 02:23:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:43.173 02:23:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.173 02:23:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.173 ************************************ 00:07:43.173 START TEST raid_state_function_test 00:07:43.173 ************************************ 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:43.173 
02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63641 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:43.173 Process raid pid: 63641 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63641' 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63641 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63641 ']' 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.173 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.173 [2024-11-28 02:23:16.715095] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:43.173 [2024-11-28 02:23:16.715688] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.433 [2024-11-28 02:23:16.869213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.433 [2024-11-28 02:23:16.979892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.693 [2024-11-28 02:23:17.180286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.693 [2024-11-28 02:23:17.180402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.953 [2024-11-28 02:23:17.539202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.953 [2024-11-28 02:23:17.539296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.953 [2024-11-28 02:23:17.539332] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.953 [2024-11-28 02:23:17.539372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.953 [2024-11-28 02:23:17.539391] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:43.953 [2024-11-28 02:23:17.539413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.953 02:23:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.953 "name": "Existed_Raid", 00:07:43.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.953 "strip_size_kb": 64, 00:07:43.953 "state": "configuring", 00:07:43.953 "raid_level": "raid0", 00:07:43.953 "superblock": false, 00:07:43.953 "num_base_bdevs": 3, 00:07:43.953 "num_base_bdevs_discovered": 0, 00:07:43.953 "num_base_bdevs_operational": 3, 00:07:43.953 "base_bdevs_list": [ 00:07:43.953 { 00:07:43.953 "name": "BaseBdev1", 00:07:43.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.953 "is_configured": false, 00:07:43.953 "data_offset": 0, 00:07:43.953 "data_size": 0 00:07:43.953 }, 00:07:43.953 { 00:07:43.953 "name": "BaseBdev2", 00:07:43.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.953 "is_configured": false, 00:07:43.953 "data_offset": 0, 00:07:43.953 "data_size": 0 00:07:43.953 }, 00:07:43.953 { 00:07:43.953 "name": "BaseBdev3", 00:07:43.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.953 "is_configured": false, 00:07:43.953 "data_offset": 0, 00:07:43.953 "data_size": 0 00:07:43.953 } 00:07:43.953 ] 00:07:43.953 }' 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.953 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.523 02:23:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 [2024-11-28 02:23:18.018309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.523 [2024-11-28 02:23:18.018390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 [2024-11-28 02:23:18.030291] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.523 [2024-11-28 02:23:18.030387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.523 [2024-11-28 02:23:18.030415] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.523 [2024-11-28 02:23:18.030450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.523 [2024-11-28 02:23:18.030468] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:44.523 [2024-11-28 02:23:18.030505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 [2024-11-28 02:23:18.077450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.523 BaseBdev1 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.523 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 [ 00:07:44.523 { 00:07:44.523 "name": "BaseBdev1", 00:07:44.523 "aliases": [ 00:07:44.523 "eeec0108-69f1-4ca3-ae17-f597e68b8bd1" 00:07:44.523 ], 00:07:44.523 
"product_name": "Malloc disk", 00:07:44.523 "block_size": 512, 00:07:44.523 "num_blocks": 65536, 00:07:44.523 "uuid": "eeec0108-69f1-4ca3-ae17-f597e68b8bd1", 00:07:44.523 "assigned_rate_limits": { 00:07:44.523 "rw_ios_per_sec": 0, 00:07:44.523 "rw_mbytes_per_sec": 0, 00:07:44.523 "r_mbytes_per_sec": 0, 00:07:44.523 "w_mbytes_per_sec": 0 00:07:44.523 }, 00:07:44.523 "claimed": true, 00:07:44.523 "claim_type": "exclusive_write", 00:07:44.523 "zoned": false, 00:07:44.523 "supported_io_types": { 00:07:44.523 "read": true, 00:07:44.523 "write": true, 00:07:44.524 "unmap": true, 00:07:44.524 "flush": true, 00:07:44.524 "reset": true, 00:07:44.524 "nvme_admin": false, 00:07:44.524 "nvme_io": false, 00:07:44.524 "nvme_io_md": false, 00:07:44.524 "write_zeroes": true, 00:07:44.524 "zcopy": true, 00:07:44.524 "get_zone_info": false, 00:07:44.524 "zone_management": false, 00:07:44.524 "zone_append": false, 00:07:44.524 "compare": false, 00:07:44.524 "compare_and_write": false, 00:07:44.524 "abort": true, 00:07:44.524 "seek_hole": false, 00:07:44.524 "seek_data": false, 00:07:44.524 "copy": true, 00:07:44.524 "nvme_iov_md": false 00:07:44.524 }, 00:07:44.524 "memory_domains": [ 00:07:44.524 { 00:07:44.524 "dma_device_id": "system", 00:07:44.524 "dma_device_type": 1 00:07:44.524 }, 00:07:44.524 { 00:07:44.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.524 "dma_device_type": 2 00:07:44.524 } 00:07:44.524 ], 00:07:44.524 "driver_specific": {} 00:07:44.524 } 00:07:44.524 ] 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.524 02:23:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.524 "name": "Existed_Raid", 00:07:44.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.524 "strip_size_kb": 64, 00:07:44.524 "state": "configuring", 00:07:44.524 "raid_level": "raid0", 00:07:44.524 "superblock": false, 00:07:44.524 "num_base_bdevs": 3, 00:07:44.524 "num_base_bdevs_discovered": 1, 00:07:44.524 "num_base_bdevs_operational": 3, 00:07:44.524 "base_bdevs_list": [ 00:07:44.524 { 00:07:44.524 "name": "BaseBdev1", 
00:07:44.524 "uuid": "eeec0108-69f1-4ca3-ae17-f597e68b8bd1", 00:07:44.524 "is_configured": true, 00:07:44.524 "data_offset": 0, 00:07:44.524 "data_size": 65536 00:07:44.524 }, 00:07:44.524 { 00:07:44.524 "name": "BaseBdev2", 00:07:44.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.524 "is_configured": false, 00:07:44.524 "data_offset": 0, 00:07:44.524 "data_size": 0 00:07:44.524 }, 00:07:44.524 { 00:07:44.524 "name": "BaseBdev3", 00:07:44.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.524 "is_configured": false, 00:07:44.524 "data_offset": 0, 00:07:44.524 "data_size": 0 00:07:44.524 } 00:07:44.524 ] 00:07:44.524 }' 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.524 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.094 [2024-11-28 02:23:18.552682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.094 [2024-11-28 02:23:18.552781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.094 [2024-11-28 
02:23:18.564700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.094 [2024-11-28 02:23:18.566590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.094 [2024-11-28 02:23:18.566666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.094 [2024-11-28 02:23:18.566695] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:45.094 [2024-11-28 02:23:18.566717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.094 "name": "Existed_Raid", 00:07:45.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.094 "strip_size_kb": 64, 00:07:45.094 "state": "configuring", 00:07:45.094 "raid_level": "raid0", 00:07:45.094 "superblock": false, 00:07:45.094 "num_base_bdevs": 3, 00:07:45.094 "num_base_bdevs_discovered": 1, 00:07:45.094 "num_base_bdevs_operational": 3, 00:07:45.094 "base_bdevs_list": [ 00:07:45.094 { 00:07:45.094 "name": "BaseBdev1", 00:07:45.094 "uuid": "eeec0108-69f1-4ca3-ae17-f597e68b8bd1", 00:07:45.094 "is_configured": true, 00:07:45.094 "data_offset": 0, 00:07:45.094 "data_size": 65536 00:07:45.094 }, 00:07:45.094 { 00:07:45.094 "name": "BaseBdev2", 00:07:45.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.094 "is_configured": false, 00:07:45.094 "data_offset": 0, 00:07:45.094 "data_size": 0 00:07:45.094 }, 00:07:45.094 { 00:07:45.094 "name": "BaseBdev3", 00:07:45.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.094 "is_configured": false, 00:07:45.094 "data_offset": 0, 00:07:45.094 "data_size": 0 00:07:45.094 } 00:07:45.094 ] 00:07:45.094 }' 00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:45.094 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.354 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:45.354 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.614 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.614 [2024-11-28 02:23:19.070810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:45.614 BaseBdev2 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:45.615 02:23:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.615 [ 00:07:45.615 { 00:07:45.615 "name": "BaseBdev2", 00:07:45.615 "aliases": [ 00:07:45.615 "01ba08a0-e2e0-42f6-aead-54af9e8480d6" 00:07:45.615 ], 00:07:45.615 "product_name": "Malloc disk", 00:07:45.615 "block_size": 512, 00:07:45.615 "num_blocks": 65536, 00:07:45.615 "uuid": "01ba08a0-e2e0-42f6-aead-54af9e8480d6", 00:07:45.615 "assigned_rate_limits": { 00:07:45.615 "rw_ios_per_sec": 0, 00:07:45.615 "rw_mbytes_per_sec": 0, 00:07:45.615 "r_mbytes_per_sec": 0, 00:07:45.615 "w_mbytes_per_sec": 0 00:07:45.615 }, 00:07:45.615 "claimed": true, 00:07:45.615 "claim_type": "exclusive_write", 00:07:45.615 "zoned": false, 00:07:45.615 "supported_io_types": { 00:07:45.615 "read": true, 00:07:45.615 "write": true, 00:07:45.615 "unmap": true, 00:07:45.615 "flush": true, 00:07:45.615 "reset": true, 00:07:45.615 "nvme_admin": false, 00:07:45.615 "nvme_io": false, 00:07:45.615 "nvme_io_md": false, 00:07:45.615 "write_zeroes": true, 00:07:45.615 "zcopy": true, 00:07:45.615 "get_zone_info": false, 00:07:45.615 "zone_management": false, 00:07:45.615 "zone_append": false, 00:07:45.615 "compare": false, 00:07:45.615 "compare_and_write": false, 00:07:45.615 "abort": true, 00:07:45.615 "seek_hole": false, 00:07:45.615 "seek_data": false, 00:07:45.615 "copy": true, 00:07:45.615 "nvme_iov_md": false 00:07:45.615 }, 00:07:45.615 "memory_domains": [ 00:07:45.615 { 00:07:45.615 "dma_device_id": "system", 00:07:45.615 "dma_device_type": 1 00:07:45.615 }, 00:07:45.615 { 00:07:45.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.615 "dma_device_type": 2 00:07:45.615 } 00:07:45.615 ], 00:07:45.615 "driver_specific": {} 00:07:45.615 } 00:07:45.615 ] 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.615 02:23:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.615 "name": "Existed_Raid", 00:07:45.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.615 "strip_size_kb": 64, 00:07:45.615 "state": "configuring", 00:07:45.615 "raid_level": "raid0", 00:07:45.615 "superblock": false, 00:07:45.615 "num_base_bdevs": 3, 00:07:45.615 "num_base_bdevs_discovered": 2, 00:07:45.615 "num_base_bdevs_operational": 3, 00:07:45.615 "base_bdevs_list": [ 00:07:45.615 { 00:07:45.615 "name": "BaseBdev1", 00:07:45.615 "uuid": "eeec0108-69f1-4ca3-ae17-f597e68b8bd1", 00:07:45.615 "is_configured": true, 00:07:45.615 "data_offset": 0, 00:07:45.615 "data_size": 65536 00:07:45.615 }, 00:07:45.615 { 00:07:45.615 "name": "BaseBdev2", 00:07:45.615 "uuid": "01ba08a0-e2e0-42f6-aead-54af9e8480d6", 00:07:45.615 "is_configured": true, 00:07:45.615 "data_offset": 0, 00:07:45.615 "data_size": 65536 00:07:45.615 }, 00:07:45.615 { 00:07:45.615 "name": "BaseBdev3", 00:07:45.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.615 "is_configured": false, 00:07:45.615 "data_offset": 0, 00:07:45.615 "data_size": 0 00:07:45.615 } 00:07:45.615 ] 00:07:45.615 }' 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.615 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.184 [2024-11-28 02:23:19.613045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:46.184 [2024-11-28 02:23:19.613168] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:46.184 [2024-11-28 02:23:19.613186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:46.184 [2024-11-28 02:23:19.613473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:46.184 [2024-11-28 02:23:19.613649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:46.184 [2024-11-28 02:23:19.613659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:46.184 [2024-11-28 02:23:19.613939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.184 BaseBdev3 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.184 
02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.184 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.184 [ 00:07:46.184 { 00:07:46.184 "name": "BaseBdev3", 00:07:46.184 "aliases": [ 00:07:46.184 "a0e11dc5-7118-4cec-bdd7-81b21f8c0fcc" 00:07:46.184 ], 00:07:46.184 "product_name": "Malloc disk", 00:07:46.184 "block_size": 512, 00:07:46.184 "num_blocks": 65536, 00:07:46.184 "uuid": "a0e11dc5-7118-4cec-bdd7-81b21f8c0fcc", 00:07:46.184 "assigned_rate_limits": { 00:07:46.184 "rw_ios_per_sec": 0, 00:07:46.184 "rw_mbytes_per_sec": 0, 00:07:46.184 "r_mbytes_per_sec": 0, 00:07:46.184 "w_mbytes_per_sec": 0 00:07:46.184 }, 00:07:46.184 "claimed": true, 00:07:46.184 "claim_type": "exclusive_write", 00:07:46.184 "zoned": false, 00:07:46.184 "supported_io_types": { 00:07:46.184 "read": true, 00:07:46.184 "write": true, 00:07:46.184 "unmap": true, 00:07:46.184 "flush": true, 00:07:46.184 "reset": true, 00:07:46.184 "nvme_admin": false, 00:07:46.184 "nvme_io": false, 00:07:46.184 "nvme_io_md": false, 00:07:46.184 "write_zeroes": true, 00:07:46.184 "zcopy": true, 00:07:46.184 "get_zone_info": false, 00:07:46.184 "zone_management": false, 00:07:46.184 "zone_append": false, 00:07:46.184 "compare": false, 00:07:46.184 "compare_and_write": false, 00:07:46.184 "abort": true, 00:07:46.184 "seek_hole": false, 00:07:46.184 "seek_data": false, 00:07:46.184 "copy": true, 00:07:46.184 "nvme_iov_md": false 00:07:46.184 }, 00:07:46.184 "memory_domains": [ 00:07:46.184 { 00:07:46.184 "dma_device_id": "system", 00:07:46.184 "dma_device_type": 1 00:07:46.184 }, 00:07:46.185 { 00:07:46.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.185 "dma_device_type": 2 00:07:46.185 } 00:07:46.185 ], 00:07:46.185 "driver_specific": {} 00:07:46.185 } 00:07:46.185 ] 
00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.185 "name": "Existed_Raid", 00:07:46.185 "uuid": "de89a807-aaef-4e3a-8cd2-4599f810b421", 00:07:46.185 "strip_size_kb": 64, 00:07:46.185 "state": "online", 00:07:46.185 "raid_level": "raid0", 00:07:46.185 "superblock": false, 00:07:46.185 "num_base_bdevs": 3, 00:07:46.185 "num_base_bdevs_discovered": 3, 00:07:46.185 "num_base_bdevs_operational": 3, 00:07:46.185 "base_bdevs_list": [ 00:07:46.185 { 00:07:46.185 "name": "BaseBdev1", 00:07:46.185 "uuid": "eeec0108-69f1-4ca3-ae17-f597e68b8bd1", 00:07:46.185 "is_configured": true, 00:07:46.185 "data_offset": 0, 00:07:46.185 "data_size": 65536 00:07:46.185 }, 00:07:46.185 { 00:07:46.185 "name": "BaseBdev2", 00:07:46.185 "uuid": "01ba08a0-e2e0-42f6-aead-54af9e8480d6", 00:07:46.185 "is_configured": true, 00:07:46.185 "data_offset": 0, 00:07:46.185 "data_size": 65536 00:07:46.185 }, 00:07:46.185 { 00:07:46.185 "name": "BaseBdev3", 00:07:46.185 "uuid": "a0e11dc5-7118-4cec-bdd7-81b21f8c0fcc", 00:07:46.185 "is_configured": true, 00:07:46.185 "data_offset": 0, 00:07:46.185 "data_size": 65536 00:07:46.185 } 00:07:46.185 ] 00:07:46.185 }' 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.185 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.445 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:46.445 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:46.445 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.445 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:46.445 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.445 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.445 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.445 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:46.445 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.445 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.705 [2024-11-28 02:23:20.124512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.705 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.705 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.705 "name": "Existed_Raid", 00:07:46.705 "aliases": [ 00:07:46.705 "de89a807-aaef-4e3a-8cd2-4599f810b421" 00:07:46.705 ], 00:07:46.705 "product_name": "Raid Volume", 00:07:46.705 "block_size": 512, 00:07:46.705 "num_blocks": 196608, 00:07:46.705 "uuid": "de89a807-aaef-4e3a-8cd2-4599f810b421", 00:07:46.705 "assigned_rate_limits": { 00:07:46.705 "rw_ios_per_sec": 0, 00:07:46.705 "rw_mbytes_per_sec": 0, 00:07:46.705 "r_mbytes_per_sec": 0, 00:07:46.705 "w_mbytes_per_sec": 0 00:07:46.705 }, 00:07:46.705 "claimed": false, 00:07:46.705 "zoned": false, 00:07:46.705 "supported_io_types": { 00:07:46.705 "read": true, 00:07:46.705 "write": true, 00:07:46.705 "unmap": true, 00:07:46.705 "flush": true, 00:07:46.705 "reset": true, 00:07:46.705 "nvme_admin": false, 00:07:46.705 "nvme_io": false, 00:07:46.705 "nvme_io_md": false, 00:07:46.705 "write_zeroes": true, 00:07:46.705 "zcopy": false, 00:07:46.705 "get_zone_info": false, 00:07:46.705 "zone_management": false, 00:07:46.705 
"zone_append": false, 00:07:46.705 "compare": false, 00:07:46.705 "compare_and_write": false, 00:07:46.705 "abort": false, 00:07:46.705 "seek_hole": false, 00:07:46.705 "seek_data": false, 00:07:46.705 "copy": false, 00:07:46.705 "nvme_iov_md": false 00:07:46.705 }, 00:07:46.705 "memory_domains": [ 00:07:46.705 { 00:07:46.705 "dma_device_id": "system", 00:07:46.705 "dma_device_type": 1 00:07:46.705 }, 00:07:46.705 { 00:07:46.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.705 "dma_device_type": 2 00:07:46.705 }, 00:07:46.705 { 00:07:46.705 "dma_device_id": "system", 00:07:46.705 "dma_device_type": 1 00:07:46.705 }, 00:07:46.705 { 00:07:46.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.705 "dma_device_type": 2 00:07:46.705 }, 00:07:46.705 { 00:07:46.705 "dma_device_id": "system", 00:07:46.705 "dma_device_type": 1 00:07:46.705 }, 00:07:46.705 { 00:07:46.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.705 "dma_device_type": 2 00:07:46.705 } 00:07:46.705 ], 00:07:46.705 "driver_specific": { 00:07:46.705 "raid": { 00:07:46.705 "uuid": "de89a807-aaef-4e3a-8cd2-4599f810b421", 00:07:46.705 "strip_size_kb": 64, 00:07:46.705 "state": "online", 00:07:46.705 "raid_level": "raid0", 00:07:46.705 "superblock": false, 00:07:46.705 "num_base_bdevs": 3, 00:07:46.705 "num_base_bdevs_discovered": 3, 00:07:46.705 "num_base_bdevs_operational": 3, 00:07:46.705 "base_bdevs_list": [ 00:07:46.705 { 00:07:46.705 "name": "BaseBdev1", 00:07:46.705 "uuid": "eeec0108-69f1-4ca3-ae17-f597e68b8bd1", 00:07:46.705 "is_configured": true, 00:07:46.705 "data_offset": 0, 00:07:46.705 "data_size": 65536 00:07:46.705 }, 00:07:46.705 { 00:07:46.705 "name": "BaseBdev2", 00:07:46.705 "uuid": "01ba08a0-e2e0-42f6-aead-54af9e8480d6", 00:07:46.705 "is_configured": true, 00:07:46.705 "data_offset": 0, 00:07:46.705 "data_size": 65536 00:07:46.705 }, 00:07:46.705 { 00:07:46.705 "name": "BaseBdev3", 00:07:46.705 "uuid": "a0e11dc5-7118-4cec-bdd7-81b21f8c0fcc", 00:07:46.705 "is_configured": true, 
00:07:46.705 "data_offset": 0, 00:07:46.705 "data_size": 65536 00:07:46.705 } 00:07:46.705 ] 00:07:46.705 } 00:07:46.705 } 00:07:46.705 }' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:46.706 BaseBdev2 00:07:46.706 BaseBdev3' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.706 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.706 [2024-11-28 02:23:20.367882] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:46.706 [2024-11-28 02:23:20.367967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.706 [2024-11-28 02:23:20.368053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.965 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.966 "name": "Existed_Raid", 00:07:46.966 "uuid": "de89a807-aaef-4e3a-8cd2-4599f810b421", 00:07:46.966 "strip_size_kb": 64, 00:07:46.966 "state": "offline", 00:07:46.966 "raid_level": "raid0", 00:07:46.966 "superblock": false, 00:07:46.966 "num_base_bdevs": 3, 00:07:46.966 "num_base_bdevs_discovered": 2, 00:07:46.966 "num_base_bdevs_operational": 2, 00:07:46.966 "base_bdevs_list": [ 00:07:46.966 { 00:07:46.966 "name": null, 00:07:46.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.966 "is_configured": false, 00:07:46.966 "data_offset": 0, 00:07:46.966 "data_size": 65536 00:07:46.966 }, 00:07:46.966 { 00:07:46.966 "name": "BaseBdev2", 00:07:46.966 "uuid": "01ba08a0-e2e0-42f6-aead-54af9e8480d6", 00:07:46.966 "is_configured": true, 00:07:46.966 "data_offset": 0, 00:07:46.966 "data_size": 65536 00:07:46.966 }, 00:07:46.966 { 00:07:46.966 "name": "BaseBdev3", 00:07:46.966 "uuid": "a0e11dc5-7118-4cec-bdd7-81b21f8c0fcc", 00:07:46.966 "is_configured": true, 00:07:46.966 "data_offset": 0, 00:07:46.966 "data_size": 65536 00:07:46.966 } 00:07:46.966 ] 00:07:46.966 }' 00:07:46.966 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.966 02:23:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.535 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.535 [2024-11-28 02:23:20.991833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.535 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.535 [2024-11-28 02:23:21.133118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:47.535 [2024-11-28 02:23:21.133208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.796 02:23:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.796 BaseBdev2 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.796 02:23:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.796 [ 00:07:47.796 { 00:07:47.796 "name": "BaseBdev2", 00:07:47.796 "aliases": [ 00:07:47.796 "bb805a21-da29-4546-8f89-178f17bd5a29" 00:07:47.796 ], 00:07:47.796 "product_name": "Malloc disk", 00:07:47.796 "block_size": 512, 00:07:47.796 "num_blocks": 65536, 00:07:47.796 "uuid": "bb805a21-da29-4546-8f89-178f17bd5a29", 00:07:47.796 "assigned_rate_limits": { 00:07:47.796 "rw_ios_per_sec": 0, 00:07:47.796 "rw_mbytes_per_sec": 0, 00:07:47.796 "r_mbytes_per_sec": 0, 00:07:47.796 "w_mbytes_per_sec": 0 00:07:47.796 }, 00:07:47.796 "claimed": false, 00:07:47.796 "zoned": false, 00:07:47.796 "supported_io_types": { 00:07:47.796 "read": true, 00:07:47.796 "write": true, 00:07:47.796 "unmap": true, 00:07:47.796 "flush": true, 00:07:47.796 "reset": true, 00:07:47.796 "nvme_admin": false, 00:07:47.796 "nvme_io": false, 00:07:47.796 "nvme_io_md": false, 00:07:47.796 "write_zeroes": true, 00:07:47.796 "zcopy": true, 00:07:47.796 "get_zone_info": false, 00:07:47.796 "zone_management": false, 00:07:47.796 "zone_append": false, 00:07:47.796 "compare": false, 00:07:47.796 "compare_and_write": false, 00:07:47.796 "abort": true, 00:07:47.796 "seek_hole": false, 00:07:47.796 "seek_data": false, 00:07:47.796 "copy": true, 00:07:47.796 "nvme_iov_md": false 00:07:47.796 }, 00:07:47.796 "memory_domains": [ 00:07:47.796 { 00:07:47.796 "dma_device_id": "system", 00:07:47.796 "dma_device_type": 1 00:07:47.796 }, 00:07:47.796 { 00:07:47.796 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:47.796 "dma_device_type": 2 00:07:47.796 } 00:07:47.796 ], 00:07:47.796 "driver_specific": {} 00:07:47.796 } 00:07:47.796 ] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.796 BaseBdev3 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.796 02:23:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.796 [ 00:07:47.796 { 00:07:47.796 "name": "BaseBdev3", 00:07:47.796 "aliases": [ 00:07:47.796 "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7" 00:07:47.796 ], 00:07:47.796 "product_name": "Malloc disk", 00:07:47.796 "block_size": 512, 00:07:47.796 "num_blocks": 65536, 00:07:47.796 "uuid": "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7", 00:07:47.796 "assigned_rate_limits": { 00:07:47.796 "rw_ios_per_sec": 0, 00:07:47.796 "rw_mbytes_per_sec": 0, 00:07:47.796 "r_mbytes_per_sec": 0, 00:07:47.796 "w_mbytes_per_sec": 0 00:07:47.796 }, 00:07:47.796 "claimed": false, 00:07:47.796 "zoned": false, 00:07:47.796 "supported_io_types": { 00:07:47.796 "read": true, 00:07:47.796 "write": true, 00:07:47.796 "unmap": true, 00:07:47.796 "flush": true, 00:07:47.796 "reset": true, 00:07:47.796 "nvme_admin": false, 00:07:47.796 "nvme_io": false, 00:07:47.796 "nvme_io_md": false, 00:07:47.796 "write_zeroes": true, 00:07:47.796 "zcopy": true, 00:07:47.796 "get_zone_info": false, 00:07:47.796 "zone_management": false, 00:07:47.796 "zone_append": false, 00:07:47.796 "compare": false, 00:07:47.796 "compare_and_write": false, 00:07:47.796 "abort": true, 00:07:47.796 "seek_hole": false, 00:07:47.796 "seek_data": false, 00:07:47.796 "copy": true, 00:07:47.796 "nvme_iov_md": false 00:07:47.796 }, 00:07:47.796 "memory_domains": [ 00:07:47.796 { 00:07:47.796 "dma_device_id": "system", 00:07:47.796 "dma_device_type": 1 00:07:47.796 }, 00:07:47.796 { 00:07:47.796 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:47.796 "dma_device_type": 2 00:07:47.796 } 00:07:47.796 ], 00:07:47.796 "driver_specific": {} 00:07:47.796 } 00:07:47.796 ] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.796 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.796 [2024-11-28 02:23:21.433037] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:47.796 [2024-11-28 02:23:21.433122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:47.797 [2024-11-28 02:23:21.433161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.797 [2024-11-28 02:23:21.434853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.797 
02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.797 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.056 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.056 "name": "Existed_Raid", 00:07:48.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.056 "strip_size_kb": 64, 00:07:48.056 "state": "configuring", 00:07:48.056 "raid_level": "raid0", 00:07:48.056 "superblock": false, 00:07:48.056 "num_base_bdevs": 3, 00:07:48.056 "num_base_bdevs_discovered": 2, 00:07:48.056 "num_base_bdevs_operational": 3, 00:07:48.056 "base_bdevs_list": [ 00:07:48.056 { 00:07:48.056 "name": "BaseBdev1", 00:07:48.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.056 "is_configured": false, 00:07:48.056 
"data_offset": 0, 00:07:48.056 "data_size": 0 00:07:48.056 }, 00:07:48.056 { 00:07:48.056 "name": "BaseBdev2", 00:07:48.056 "uuid": "bb805a21-da29-4546-8f89-178f17bd5a29", 00:07:48.056 "is_configured": true, 00:07:48.056 "data_offset": 0, 00:07:48.056 "data_size": 65536 00:07:48.056 }, 00:07:48.056 { 00:07:48.056 "name": "BaseBdev3", 00:07:48.056 "uuid": "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7", 00:07:48.056 "is_configured": true, 00:07:48.056 "data_offset": 0, 00:07:48.056 "data_size": 65536 00:07:48.056 } 00:07:48.056 ] 00:07:48.056 }' 00:07:48.056 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.056 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.316 [2024-11-28 02:23:21.840366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.316 "name": "Existed_Raid", 00:07:48.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.316 "strip_size_kb": 64, 00:07:48.316 "state": "configuring", 00:07:48.316 "raid_level": "raid0", 00:07:48.316 "superblock": false, 00:07:48.316 "num_base_bdevs": 3, 00:07:48.316 "num_base_bdevs_discovered": 1, 00:07:48.316 "num_base_bdevs_operational": 3, 00:07:48.316 "base_bdevs_list": [ 00:07:48.316 { 00:07:48.316 "name": "BaseBdev1", 00:07:48.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.316 "is_configured": false, 00:07:48.316 "data_offset": 0, 00:07:48.316 "data_size": 0 00:07:48.316 }, 00:07:48.316 { 00:07:48.316 "name": null, 00:07:48.316 "uuid": "bb805a21-da29-4546-8f89-178f17bd5a29", 00:07:48.316 "is_configured": false, 00:07:48.316 "data_offset": 0, 00:07:48.316 "data_size": 65536 00:07:48.316 }, 00:07:48.316 { 
00:07:48.316 "name": "BaseBdev3", 00:07:48.316 "uuid": "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7", 00:07:48.316 "is_configured": true, 00:07:48.316 "data_offset": 0, 00:07:48.316 "data_size": 65536 00:07:48.316 } 00:07:48.316 ] 00:07:48.316 }' 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.316 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 [2024-11-28 02:23:22.354679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.886 BaseBdev1 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:48.886 02:23:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 [ 00:07:48.886 { 00:07:48.886 "name": "BaseBdev1", 00:07:48.886 "aliases": [ 00:07:48.886 "dff4568e-3719-4690-a375-885f16c73d70" 00:07:48.886 ], 00:07:48.886 "product_name": "Malloc disk", 00:07:48.886 "block_size": 512, 00:07:48.886 "num_blocks": 65536, 00:07:48.886 "uuid": "dff4568e-3719-4690-a375-885f16c73d70", 00:07:48.886 "assigned_rate_limits": { 00:07:48.886 "rw_ios_per_sec": 0, 00:07:48.886 "rw_mbytes_per_sec": 0, 00:07:48.886 "r_mbytes_per_sec": 0, 00:07:48.886 "w_mbytes_per_sec": 0 00:07:48.886 }, 00:07:48.886 "claimed": true, 00:07:48.886 "claim_type": "exclusive_write", 00:07:48.886 "zoned": false, 00:07:48.886 "supported_io_types": { 00:07:48.886 "read": true, 00:07:48.886 "write": true, 00:07:48.886 "unmap": true, 00:07:48.886 "flush": true, 
00:07:48.886 "reset": true, 00:07:48.886 "nvme_admin": false, 00:07:48.886 "nvme_io": false, 00:07:48.886 "nvme_io_md": false, 00:07:48.886 "write_zeroes": true, 00:07:48.886 "zcopy": true, 00:07:48.886 "get_zone_info": false, 00:07:48.886 "zone_management": false, 00:07:48.886 "zone_append": false, 00:07:48.886 "compare": false, 00:07:48.886 "compare_and_write": false, 00:07:48.886 "abort": true, 00:07:48.886 "seek_hole": false, 00:07:48.886 "seek_data": false, 00:07:48.886 "copy": true, 00:07:48.886 "nvme_iov_md": false 00:07:48.886 }, 00:07:48.886 "memory_domains": [ 00:07:48.886 { 00:07:48.886 "dma_device_id": "system", 00:07:48.886 "dma_device_type": 1 00:07:48.886 }, 00:07:48.886 { 00:07:48.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.886 "dma_device_type": 2 00:07:48.886 } 00:07:48.886 ], 00:07:48.886 "driver_specific": {} 00:07:48.886 } 00:07:48.886 ] 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.886 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.886 "name": "Existed_Raid", 00:07:48.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.887 "strip_size_kb": 64, 00:07:48.887 "state": "configuring", 00:07:48.887 "raid_level": "raid0", 00:07:48.887 "superblock": false, 00:07:48.887 "num_base_bdevs": 3, 00:07:48.887 "num_base_bdevs_discovered": 2, 00:07:48.887 "num_base_bdevs_operational": 3, 00:07:48.887 "base_bdevs_list": [ 00:07:48.887 { 00:07:48.887 "name": "BaseBdev1", 00:07:48.887 "uuid": "dff4568e-3719-4690-a375-885f16c73d70", 00:07:48.887 "is_configured": true, 00:07:48.887 "data_offset": 0, 00:07:48.887 "data_size": 65536 00:07:48.887 }, 00:07:48.887 { 00:07:48.887 "name": null, 00:07:48.887 "uuid": "bb805a21-da29-4546-8f89-178f17bd5a29", 00:07:48.887 "is_configured": false, 00:07:48.887 "data_offset": 0, 00:07:48.887 "data_size": 65536 00:07:48.887 }, 00:07:48.887 { 00:07:48.887 "name": "BaseBdev3", 00:07:48.887 "uuid": "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7", 00:07:48.887 "is_configured": true, 00:07:48.887 "data_offset": 0, 00:07:48.887 "data_size": 65536 
00:07:48.887 } 00:07:48.887 ] 00:07:48.887 }' 00:07:48.887 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.887 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.146 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.146 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.146 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.146 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:49.146 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.406 [2024-11-28 02:23:22.861868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.406 
02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.406 "name": "Existed_Raid", 00:07:49.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.406 "strip_size_kb": 64, 00:07:49.406 "state": "configuring", 00:07:49.406 "raid_level": "raid0", 00:07:49.406 "superblock": false, 00:07:49.406 "num_base_bdevs": 3, 00:07:49.406 "num_base_bdevs_discovered": 1, 00:07:49.406 "num_base_bdevs_operational": 3, 00:07:49.406 "base_bdevs_list": [ 00:07:49.406 { 00:07:49.406 "name": "BaseBdev1", 00:07:49.406 "uuid": "dff4568e-3719-4690-a375-885f16c73d70", 00:07:49.406 "is_configured": true, 00:07:49.406 "data_offset": 0, 00:07:49.406 "data_size": 65536 00:07:49.406 }, 00:07:49.406 { 00:07:49.406 "name": null, 
00:07:49.406 "uuid": "bb805a21-da29-4546-8f89-178f17bd5a29", 00:07:49.406 "is_configured": false, 00:07:49.406 "data_offset": 0, 00:07:49.406 "data_size": 65536 00:07:49.406 }, 00:07:49.406 { 00:07:49.406 "name": null, 00:07:49.406 "uuid": "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7", 00:07:49.406 "is_configured": false, 00:07:49.406 "data_offset": 0, 00:07:49.406 "data_size": 65536 00:07:49.406 } 00:07:49.406 ] 00:07:49.406 }' 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.406 02:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.666 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.666 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.666 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.666 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:49.666 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.926 [2024-11-28 02:23:23.384996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.926 "name": "Existed_Raid", 00:07:49.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.926 "strip_size_kb": 64, 00:07:49.926 "state": "configuring", 00:07:49.926 "raid_level": "raid0", 00:07:49.926 "superblock": false, 00:07:49.926 
"num_base_bdevs": 3, 00:07:49.926 "num_base_bdevs_discovered": 2, 00:07:49.926 "num_base_bdevs_operational": 3, 00:07:49.926 "base_bdevs_list": [ 00:07:49.926 { 00:07:49.926 "name": "BaseBdev1", 00:07:49.926 "uuid": "dff4568e-3719-4690-a375-885f16c73d70", 00:07:49.926 "is_configured": true, 00:07:49.926 "data_offset": 0, 00:07:49.926 "data_size": 65536 00:07:49.926 }, 00:07:49.926 { 00:07:49.926 "name": null, 00:07:49.926 "uuid": "bb805a21-da29-4546-8f89-178f17bd5a29", 00:07:49.926 "is_configured": false, 00:07:49.926 "data_offset": 0, 00:07:49.926 "data_size": 65536 00:07:49.926 }, 00:07:49.926 { 00:07:49.926 "name": "BaseBdev3", 00:07:49.926 "uuid": "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7", 00:07:49.926 "is_configured": true, 00:07:49.926 "data_offset": 0, 00:07:49.926 "data_size": 65536 00:07:49.926 } 00:07:49.926 ] 00:07:49.926 }' 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.926 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.186 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.186 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:50.186 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.186 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:50.186 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:50.186 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.186 02:23:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.186 [2024-11-28 02:23:23.860200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.452 "name": "Existed_Raid", 00:07:50.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.452 "strip_size_kb": 64, 00:07:50.452 "state": "configuring", 00:07:50.452 "raid_level": "raid0", 00:07:50.452 "superblock": false, 00:07:50.452 "num_base_bdevs": 3, 00:07:50.452 "num_base_bdevs_discovered": 1, 00:07:50.452 "num_base_bdevs_operational": 3, 00:07:50.452 "base_bdevs_list": [ 00:07:50.452 { 00:07:50.452 "name": null, 00:07:50.452 "uuid": "dff4568e-3719-4690-a375-885f16c73d70", 00:07:50.452 "is_configured": false, 00:07:50.452 "data_offset": 0, 00:07:50.452 "data_size": 65536 00:07:50.452 }, 00:07:50.452 { 00:07:50.452 "name": null, 00:07:50.452 "uuid": "bb805a21-da29-4546-8f89-178f17bd5a29", 00:07:50.452 "is_configured": false, 00:07:50.452 "data_offset": 0, 00:07:50.452 "data_size": 65536 00:07:50.452 }, 00:07:50.452 { 00:07:50.452 "name": "BaseBdev3", 00:07:50.452 "uuid": "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7", 00:07:50.452 "is_configured": true, 00:07:50.452 "data_offset": 0, 00:07:50.452 "data_size": 65536 00:07:50.452 } 00:07:50.452 ] 00:07:50.452 }' 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.452 02:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.723 [2024-11-28 02:23:24.372686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.723 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.983 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.983 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.983 "name": "Existed_Raid", 00:07:50.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.983 "strip_size_kb": 64, 00:07:50.983 "state": "configuring", 00:07:50.983 "raid_level": "raid0", 00:07:50.983 "superblock": false, 00:07:50.983 "num_base_bdevs": 3, 00:07:50.983 "num_base_bdevs_discovered": 2, 00:07:50.983 "num_base_bdevs_operational": 3, 00:07:50.983 "base_bdevs_list": [ 00:07:50.983 { 00:07:50.983 "name": null, 00:07:50.983 "uuid": "dff4568e-3719-4690-a375-885f16c73d70", 00:07:50.983 "is_configured": false, 00:07:50.983 "data_offset": 0, 00:07:50.983 "data_size": 65536 00:07:50.983 }, 00:07:50.983 { 00:07:50.983 "name": "BaseBdev2", 00:07:50.983 "uuid": "bb805a21-da29-4546-8f89-178f17bd5a29", 00:07:50.983 "is_configured": true, 00:07:50.983 "data_offset": 0, 00:07:50.983 "data_size": 65536 00:07:50.983 }, 00:07:50.983 { 00:07:50.983 "name": "BaseBdev3", 00:07:50.983 "uuid": "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7", 00:07:50.983 "is_configured": true, 00:07:50.983 "data_offset": 0, 00:07:50.983 "data_size": 65536 00:07:50.983 } 00:07:50.983 ] 00:07:50.983 }' 00:07:50.983 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.983 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:51.244 
02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dff4568e-3719-4690-a375-885f16c73d70 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.244 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.504 [2024-11-28 02:23:24.935067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:51.504 [2024-11-28 02:23:24.935174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:51.504 [2024-11-28 02:23:24.935189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:51.504 [2024-11-28 02:23:24.935463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:07:51.504 [2024-11-28 02:23:24.935615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:51.504 [2024-11-28 02:23:24.935624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:51.504 [2024-11-28 02:23:24.935871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.504 NewBaseBdev 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.504 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:51.504 [ 00:07:51.504 { 00:07:51.504 "name": "NewBaseBdev", 00:07:51.504 "aliases": [ 00:07:51.504 "dff4568e-3719-4690-a375-885f16c73d70" 00:07:51.504 ], 00:07:51.504 "product_name": "Malloc disk", 00:07:51.504 "block_size": 512, 00:07:51.504 "num_blocks": 65536, 00:07:51.504 "uuid": "dff4568e-3719-4690-a375-885f16c73d70", 00:07:51.504 "assigned_rate_limits": { 00:07:51.504 "rw_ios_per_sec": 0, 00:07:51.504 "rw_mbytes_per_sec": 0, 00:07:51.504 "r_mbytes_per_sec": 0, 00:07:51.504 "w_mbytes_per_sec": 0 00:07:51.504 }, 00:07:51.504 "claimed": true, 00:07:51.504 "claim_type": "exclusive_write", 00:07:51.504 "zoned": false, 00:07:51.504 "supported_io_types": { 00:07:51.504 "read": true, 00:07:51.504 "write": true, 00:07:51.504 "unmap": true, 00:07:51.504 "flush": true, 00:07:51.504 "reset": true, 00:07:51.504 "nvme_admin": false, 00:07:51.504 "nvme_io": false, 00:07:51.504 "nvme_io_md": false, 00:07:51.504 "write_zeroes": true, 00:07:51.504 "zcopy": true, 00:07:51.504 "get_zone_info": false, 00:07:51.504 "zone_management": false, 00:07:51.505 "zone_append": false, 00:07:51.505 "compare": false, 00:07:51.505 "compare_and_write": false, 00:07:51.505 "abort": true, 00:07:51.505 "seek_hole": false, 00:07:51.505 "seek_data": false, 00:07:51.505 "copy": true, 00:07:51.505 "nvme_iov_md": false 00:07:51.505 }, 00:07:51.505 "memory_domains": [ 00:07:51.505 { 00:07:51.505 "dma_device_id": "system", 00:07:51.505 "dma_device_type": 1 00:07:51.505 }, 00:07:51.505 { 00:07:51.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.505 "dma_device_type": 2 00:07:51.505 } 00:07:51.505 ], 00:07:51.505 "driver_specific": {} 00:07:51.505 } 00:07:51.505 ] 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.505 02:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.505 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.505 "name": "Existed_Raid", 00:07:51.505 "uuid": "1dc2bb5d-c9dd-4054-a601-dae61e8fdff2", 00:07:51.505 "strip_size_kb": 64, 00:07:51.505 "state": "online", 00:07:51.505 "raid_level": "raid0", 00:07:51.505 "superblock": false, 00:07:51.505 "num_base_bdevs": 3, 00:07:51.505 
"num_base_bdevs_discovered": 3, 00:07:51.505 "num_base_bdevs_operational": 3, 00:07:51.505 "base_bdevs_list": [ 00:07:51.505 { 00:07:51.505 "name": "NewBaseBdev", 00:07:51.505 "uuid": "dff4568e-3719-4690-a375-885f16c73d70", 00:07:51.505 "is_configured": true, 00:07:51.505 "data_offset": 0, 00:07:51.505 "data_size": 65536 00:07:51.505 }, 00:07:51.505 { 00:07:51.505 "name": "BaseBdev2", 00:07:51.505 "uuid": "bb805a21-da29-4546-8f89-178f17bd5a29", 00:07:51.505 "is_configured": true, 00:07:51.505 "data_offset": 0, 00:07:51.505 "data_size": 65536 00:07:51.505 }, 00:07:51.505 { 00:07:51.505 "name": "BaseBdev3", 00:07:51.505 "uuid": "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7", 00:07:51.505 "is_configured": true, 00:07:51.505 "data_offset": 0, 00:07:51.505 "data_size": 65536 00:07:51.505 } 00:07:51.505 ] 00:07:51.505 }' 00:07:51.505 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.505 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.764 [2024-11-28 02:23:25.422540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.764 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.025 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.025 "name": "Existed_Raid", 00:07:52.025 "aliases": [ 00:07:52.025 "1dc2bb5d-c9dd-4054-a601-dae61e8fdff2" 00:07:52.025 ], 00:07:52.025 "product_name": "Raid Volume", 00:07:52.025 "block_size": 512, 00:07:52.025 "num_blocks": 196608, 00:07:52.025 "uuid": "1dc2bb5d-c9dd-4054-a601-dae61e8fdff2", 00:07:52.025 "assigned_rate_limits": { 00:07:52.025 "rw_ios_per_sec": 0, 00:07:52.025 "rw_mbytes_per_sec": 0, 00:07:52.025 "r_mbytes_per_sec": 0, 00:07:52.025 "w_mbytes_per_sec": 0 00:07:52.025 }, 00:07:52.025 "claimed": false, 00:07:52.025 "zoned": false, 00:07:52.025 "supported_io_types": { 00:07:52.025 "read": true, 00:07:52.025 "write": true, 00:07:52.025 "unmap": true, 00:07:52.025 "flush": true, 00:07:52.025 "reset": true, 00:07:52.025 "nvme_admin": false, 00:07:52.025 "nvme_io": false, 00:07:52.025 "nvme_io_md": false, 00:07:52.025 "write_zeroes": true, 00:07:52.025 "zcopy": false, 00:07:52.025 "get_zone_info": false, 00:07:52.025 "zone_management": false, 00:07:52.025 "zone_append": false, 00:07:52.025 "compare": false, 00:07:52.025 "compare_and_write": false, 00:07:52.025 "abort": false, 00:07:52.025 "seek_hole": false, 00:07:52.025 "seek_data": false, 00:07:52.025 "copy": false, 00:07:52.025 "nvme_iov_md": false 00:07:52.025 }, 00:07:52.025 "memory_domains": [ 00:07:52.025 { 00:07:52.025 "dma_device_id": "system", 00:07:52.025 "dma_device_type": 1 00:07:52.025 }, 00:07:52.025 { 00:07:52.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.025 "dma_device_type": 2 00:07:52.025 }, 00:07:52.025 
{ 00:07:52.025 "dma_device_id": "system", 00:07:52.025 "dma_device_type": 1 00:07:52.025 }, 00:07:52.025 { 00:07:52.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.025 "dma_device_type": 2 00:07:52.025 }, 00:07:52.025 { 00:07:52.025 "dma_device_id": "system", 00:07:52.025 "dma_device_type": 1 00:07:52.025 }, 00:07:52.025 { 00:07:52.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.025 "dma_device_type": 2 00:07:52.025 } 00:07:52.025 ], 00:07:52.025 "driver_specific": { 00:07:52.025 "raid": { 00:07:52.025 "uuid": "1dc2bb5d-c9dd-4054-a601-dae61e8fdff2", 00:07:52.025 "strip_size_kb": 64, 00:07:52.025 "state": "online", 00:07:52.025 "raid_level": "raid0", 00:07:52.025 "superblock": false, 00:07:52.025 "num_base_bdevs": 3, 00:07:52.025 "num_base_bdevs_discovered": 3, 00:07:52.025 "num_base_bdevs_operational": 3, 00:07:52.025 "base_bdevs_list": [ 00:07:52.025 { 00:07:52.025 "name": "NewBaseBdev", 00:07:52.025 "uuid": "dff4568e-3719-4690-a375-885f16c73d70", 00:07:52.025 "is_configured": true, 00:07:52.025 "data_offset": 0, 00:07:52.025 "data_size": 65536 00:07:52.025 }, 00:07:52.025 { 00:07:52.025 "name": "BaseBdev2", 00:07:52.025 "uuid": "bb805a21-da29-4546-8f89-178f17bd5a29", 00:07:52.025 "is_configured": true, 00:07:52.025 "data_offset": 0, 00:07:52.025 "data_size": 65536 00:07:52.025 }, 00:07:52.025 { 00:07:52.025 "name": "BaseBdev3", 00:07:52.025 "uuid": "c4dbf7bc-114b-4d0f-b2b5-0afa21cd18f7", 00:07:52.025 "is_configured": true, 00:07:52.025 "data_offset": 0, 00:07:52.026 "data_size": 65536 00:07:52.026 } 00:07:52.026 ] 00:07:52.026 } 00:07:52.026 } 00:07:52.026 }' 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:52.026 BaseBdev2 00:07:52.026 BaseBdev3' 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.026 
02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.026 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.026 [2024-11-28 02:23:25.701772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.026 [2024-11-28 02:23:25.701797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.026 [2024-11-28 02:23:25.701883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.026 [2024-11-28 02:23:25.701949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.026 [2024-11-28 02:23:25.701963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63641 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63641 ']' 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63641 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63641 00:07:52.286 killing process with pid 63641 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63641' 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63641 00:07:52.286 [2024-11-28 02:23:25.750039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.286 02:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63641 00:07:52.546 [2024-11-28 02:23:26.034932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.486 02:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:53.486 00:07:53.486 real 0m10.480s 00:07:53.486 user 0m16.782s 00:07:53.486 sys 0m1.748s 00:07:53.486 ************************************ 00:07:53.486 END TEST raid_state_function_test 00:07:53.486 
************************************ 00:07:53.486 02:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.486 02:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.486 02:23:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:53.486 02:23:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:53.486 02:23:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.486 02:23:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.747 ************************************ 00:07:53.747 START TEST raid_state_function_test_sb 00:07:53.747 ************************************ 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:53.747 Process raid pid: 64262 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64262 00:07:53.747 02:23:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64262' 00:07:53.747 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64262 00:07:53.748 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64262 ']' 00:07:53.748 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.748 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.748 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.748 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.748 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.748 [2024-11-28 02:23:27.268728] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:53.748 [2024-11-28 02:23:27.269395] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.008 [2024-11-28 02:23:27.435637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.008 [2024-11-28 02:23:27.535205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.268 [2024-11-28 02:23:27.726612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.268 [2024-11-28 02:23:27.726694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.529 [2024-11-28 02:23:28.105911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:54.529 [2024-11-28 02:23:28.106179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:54.529 [2024-11-28 02:23:28.106227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.529 [2024-11-28 02:23:28.106313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.529 [2024-11-28 02:23:28.106346] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:07:54.529 [2024-11-28 02:23:28.106415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.529 "name": "Existed_Raid", 00:07:54.529 "uuid": "04d69f2e-a463-4b2b-95f1-8a93a832de71", 00:07:54.529 "strip_size_kb": 64, 00:07:54.529 "state": "configuring", 00:07:54.529 "raid_level": "raid0", 00:07:54.529 "superblock": true, 00:07:54.529 "num_base_bdevs": 3, 00:07:54.529 "num_base_bdevs_discovered": 0, 00:07:54.529 "num_base_bdevs_operational": 3, 00:07:54.529 "base_bdevs_list": [ 00:07:54.529 { 00:07:54.529 "name": "BaseBdev1", 00:07:54.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.529 "is_configured": false, 00:07:54.529 "data_offset": 0, 00:07:54.529 "data_size": 0 00:07:54.529 }, 00:07:54.529 { 00:07:54.529 "name": "BaseBdev2", 00:07:54.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.529 "is_configured": false, 00:07:54.529 "data_offset": 0, 00:07:54.529 "data_size": 0 00:07:54.529 }, 00:07:54.529 { 00:07:54.529 "name": "BaseBdev3", 00:07:54.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.529 "is_configured": false, 00:07:54.529 "data_offset": 0, 00:07:54.529 "data_size": 0 00:07:54.529 } 00:07:54.529 ] 00:07:54.529 }' 00:07:54.529 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.530 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.101 [2024-11-28 02:23:28.608996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.101 [2024-11-28 02:23:28.609033] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.101 [2024-11-28 02:23:28.620985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.101 [2024-11-28 02:23:28.621283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.101 [2024-11-28 02:23:28.621301] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.101 [2024-11-28 02:23:28.621392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.101 [2024-11-28 02:23:28.621403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:55.101 [2024-11-28 02:23:28.621454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.101 [2024-11-28 02:23:28.662146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.101 BaseBdev1 
00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.101 [ 00:07:55.101 { 00:07:55.101 "name": "BaseBdev1", 00:07:55.101 "aliases": [ 00:07:55.101 "7c98e01f-6c0c-40cf-a036-e9c756db6005" 00:07:55.101 ], 00:07:55.101 "product_name": "Malloc disk", 00:07:55.101 "block_size": 512, 00:07:55.101 "num_blocks": 65536, 00:07:55.101 "uuid": "7c98e01f-6c0c-40cf-a036-e9c756db6005", 00:07:55.101 "assigned_rate_limits": { 00:07:55.101 
"rw_ios_per_sec": 0, 00:07:55.101 "rw_mbytes_per_sec": 0, 00:07:55.101 "r_mbytes_per_sec": 0, 00:07:55.101 "w_mbytes_per_sec": 0 00:07:55.101 }, 00:07:55.101 "claimed": true, 00:07:55.101 "claim_type": "exclusive_write", 00:07:55.101 "zoned": false, 00:07:55.101 "supported_io_types": { 00:07:55.101 "read": true, 00:07:55.101 "write": true, 00:07:55.101 "unmap": true, 00:07:55.101 "flush": true, 00:07:55.101 "reset": true, 00:07:55.101 "nvme_admin": false, 00:07:55.101 "nvme_io": false, 00:07:55.101 "nvme_io_md": false, 00:07:55.101 "write_zeroes": true, 00:07:55.101 "zcopy": true, 00:07:55.101 "get_zone_info": false, 00:07:55.101 "zone_management": false, 00:07:55.101 "zone_append": false, 00:07:55.101 "compare": false, 00:07:55.101 "compare_and_write": false, 00:07:55.101 "abort": true, 00:07:55.101 "seek_hole": false, 00:07:55.101 "seek_data": false, 00:07:55.101 "copy": true, 00:07:55.101 "nvme_iov_md": false 00:07:55.101 }, 00:07:55.101 "memory_domains": [ 00:07:55.101 { 00:07:55.101 "dma_device_id": "system", 00:07:55.101 "dma_device_type": 1 00:07:55.101 }, 00:07:55.101 { 00:07:55.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.101 "dma_device_type": 2 00:07:55.101 } 00:07:55.101 ], 00:07:55.101 "driver_specific": {} 00:07:55.101 } 00:07:55.101 ] 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:55.101 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.102 "name": "Existed_Raid", 00:07:55.102 "uuid": "6206082c-d9be-417d-aae3-47dd3603d017", 00:07:55.102 "strip_size_kb": 64, 00:07:55.102 "state": "configuring", 00:07:55.102 "raid_level": "raid0", 00:07:55.102 "superblock": true, 00:07:55.102 "num_base_bdevs": 3, 00:07:55.102 "num_base_bdevs_discovered": 1, 00:07:55.102 "num_base_bdevs_operational": 3, 00:07:55.102 "base_bdevs_list": [ 00:07:55.102 { 00:07:55.102 "name": "BaseBdev1", 00:07:55.102 "uuid": "7c98e01f-6c0c-40cf-a036-e9c756db6005", 00:07:55.102 "is_configured": true, 00:07:55.102 "data_offset": 2048, 00:07:55.102 "data_size": 63488 
00:07:55.102 }, 00:07:55.102 { 00:07:55.102 "name": "BaseBdev2", 00:07:55.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.102 "is_configured": false, 00:07:55.102 "data_offset": 0, 00:07:55.102 "data_size": 0 00:07:55.102 }, 00:07:55.102 { 00:07:55.102 "name": "BaseBdev3", 00:07:55.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.102 "is_configured": false, 00:07:55.102 "data_offset": 0, 00:07:55.102 "data_size": 0 00:07:55.102 } 00:07:55.102 ] 00:07:55.102 }' 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.102 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.674 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.674 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.674 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.674 [2024-11-28 02:23:29.117413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.674 [2024-11-28 02:23:29.117478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:55.674 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.674 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:55.674 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.674 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.674 [2024-11-28 02:23:29.129428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.674 [2024-11-28 
02:23:29.131257] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.674 [2024-11-28 02:23:29.131480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.674 [2024-11-28 02:23:29.131528] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:55.674 [2024-11-28 02:23:29.131602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:55.674 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.674 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:55.674 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.675 "name": "Existed_Raid", 00:07:55.675 "uuid": "84aace1a-43d0-4b1f-991b-733cf70601ee", 00:07:55.675 "strip_size_kb": 64, 00:07:55.675 "state": "configuring", 00:07:55.675 "raid_level": "raid0", 00:07:55.675 "superblock": true, 00:07:55.675 "num_base_bdevs": 3, 00:07:55.675 "num_base_bdevs_discovered": 1, 00:07:55.675 "num_base_bdevs_operational": 3, 00:07:55.675 "base_bdevs_list": [ 00:07:55.675 { 00:07:55.675 "name": "BaseBdev1", 00:07:55.675 "uuid": "7c98e01f-6c0c-40cf-a036-e9c756db6005", 00:07:55.675 "is_configured": true, 00:07:55.675 "data_offset": 2048, 00:07:55.675 "data_size": 63488 00:07:55.675 }, 00:07:55.675 { 00:07:55.675 "name": "BaseBdev2", 00:07:55.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.675 "is_configured": false, 00:07:55.675 "data_offset": 0, 00:07:55.675 "data_size": 0 00:07:55.675 }, 00:07:55.675 { 00:07:55.675 "name": "BaseBdev3", 00:07:55.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.675 "is_configured": false, 00:07:55.675 "data_offset": 0, 00:07:55.675 "data_size": 0 00:07:55.675 } 00:07:55.675 ] 00:07:55.675 }' 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.675 02:23:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.935 [2024-11-28 02:23:29.605011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.935 BaseBdev2 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.935 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.195 [ 00:07:56.195 { 00:07:56.195 "name": "BaseBdev2", 00:07:56.195 "aliases": [ 00:07:56.195 "822f22d0-36a8-4c7d-a37c-2a00c8122425" 00:07:56.195 ], 00:07:56.195 "product_name": "Malloc disk", 00:07:56.195 "block_size": 512, 00:07:56.195 "num_blocks": 65536, 00:07:56.195 "uuid": "822f22d0-36a8-4c7d-a37c-2a00c8122425", 00:07:56.195 "assigned_rate_limits": { 00:07:56.195 "rw_ios_per_sec": 0, 00:07:56.195 "rw_mbytes_per_sec": 0, 00:07:56.195 "r_mbytes_per_sec": 0, 00:07:56.195 "w_mbytes_per_sec": 0 00:07:56.195 }, 00:07:56.195 "claimed": true, 00:07:56.195 "claim_type": "exclusive_write", 00:07:56.195 "zoned": false, 00:07:56.195 "supported_io_types": { 00:07:56.195 "read": true, 00:07:56.195 "write": true, 00:07:56.195 "unmap": true, 00:07:56.195 "flush": true, 00:07:56.195 "reset": true, 00:07:56.195 "nvme_admin": false, 00:07:56.195 "nvme_io": false, 00:07:56.195 "nvme_io_md": false, 00:07:56.195 "write_zeroes": true, 00:07:56.195 "zcopy": true, 00:07:56.195 "get_zone_info": false, 00:07:56.195 "zone_management": false, 00:07:56.195 "zone_append": false, 00:07:56.195 "compare": false, 00:07:56.195 "compare_and_write": false, 00:07:56.195 "abort": true, 00:07:56.195 "seek_hole": false, 00:07:56.195 "seek_data": false, 00:07:56.195 "copy": true, 00:07:56.195 "nvme_iov_md": false 00:07:56.195 }, 00:07:56.195 "memory_domains": [ 00:07:56.195 { 00:07:56.195 "dma_device_id": "system", 00:07:56.195 "dma_device_type": 1 00:07:56.195 }, 00:07:56.195 { 00:07:56.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.195 "dma_device_type": 2 00:07:56.195 } 00:07:56.195 ], 00:07:56.195 "driver_specific": {} 00:07:56.195 } 00:07:56.195 ] 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.195 "name": "Existed_Raid", 00:07:56.195 "uuid": "84aace1a-43d0-4b1f-991b-733cf70601ee", 00:07:56.195 "strip_size_kb": 64, 00:07:56.195 "state": "configuring", 00:07:56.195 "raid_level": "raid0", 00:07:56.195 "superblock": true, 00:07:56.195 "num_base_bdevs": 3, 00:07:56.195 "num_base_bdevs_discovered": 2, 00:07:56.195 "num_base_bdevs_operational": 3, 00:07:56.195 "base_bdevs_list": [ 00:07:56.195 { 00:07:56.195 "name": "BaseBdev1", 00:07:56.195 "uuid": "7c98e01f-6c0c-40cf-a036-e9c756db6005", 00:07:56.195 "is_configured": true, 00:07:56.195 "data_offset": 2048, 00:07:56.195 "data_size": 63488 00:07:56.195 }, 00:07:56.195 { 00:07:56.195 "name": "BaseBdev2", 00:07:56.195 "uuid": "822f22d0-36a8-4c7d-a37c-2a00c8122425", 00:07:56.195 "is_configured": true, 00:07:56.195 "data_offset": 2048, 00:07:56.195 "data_size": 63488 00:07:56.195 }, 00:07:56.195 { 00:07:56.195 "name": "BaseBdev3", 00:07:56.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.195 "is_configured": false, 00:07:56.195 "data_offset": 0, 00:07:56.195 "data_size": 0 00:07:56.195 } 00:07:56.195 ] 00:07:56.195 }' 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.195 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.455 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:56.455 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.455 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.715 [2024-11-28 02:23:30.134408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:56.715 [2024-11-28 02:23:30.134658] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:56.715 [2024-11-28 02:23:30.134679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:56.715 [2024-11-28 02:23:30.134952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:56.715 [2024-11-28 02:23:30.135116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:56.715 [2024-11-28 02:23:30.135127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:56.715 [2024-11-28 02:23:30.135305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.715 BaseBdev3 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.715 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.715 [ 00:07:56.715 { 00:07:56.715 "name": "BaseBdev3", 00:07:56.715 "aliases": [ 00:07:56.715 "2b47e74a-45bb-474c-99ff-9c5170f3e4c3" 00:07:56.715 ], 00:07:56.715 "product_name": "Malloc disk", 00:07:56.715 "block_size": 512, 00:07:56.715 "num_blocks": 65536, 00:07:56.715 "uuid": "2b47e74a-45bb-474c-99ff-9c5170f3e4c3", 00:07:56.715 "assigned_rate_limits": { 00:07:56.715 "rw_ios_per_sec": 0, 00:07:56.715 "rw_mbytes_per_sec": 0, 00:07:56.715 "r_mbytes_per_sec": 0, 00:07:56.715 "w_mbytes_per_sec": 0 00:07:56.715 }, 00:07:56.715 "claimed": true, 00:07:56.715 "claim_type": "exclusive_write", 00:07:56.715 "zoned": false, 00:07:56.715 "supported_io_types": { 00:07:56.715 "read": true, 00:07:56.715 "write": true, 00:07:56.715 "unmap": true, 00:07:56.715 "flush": true, 00:07:56.715 "reset": true, 00:07:56.715 "nvme_admin": false, 00:07:56.715 "nvme_io": false, 00:07:56.715 "nvme_io_md": false, 00:07:56.715 "write_zeroes": true, 00:07:56.715 "zcopy": true, 00:07:56.715 "get_zone_info": false, 00:07:56.715 "zone_management": false, 00:07:56.715 "zone_append": false, 00:07:56.715 "compare": false, 00:07:56.715 "compare_and_write": false, 00:07:56.715 "abort": true, 00:07:56.715 "seek_hole": false, 00:07:56.715 "seek_data": false, 00:07:56.715 "copy": true, 00:07:56.715 "nvme_iov_md": false 00:07:56.715 }, 00:07:56.715 "memory_domains": [ 00:07:56.715 { 00:07:56.715 "dma_device_id": "system", 00:07:56.716 "dma_device_type": 1 00:07:56.716 }, 00:07:56.716 { 00:07:56.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.716 "dma_device_type": 2 00:07:56.716 } 00:07:56.716 ], 00:07:56.716 "driver_specific": 
{} 00:07:56.716 } 00:07:56.716 ] 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.716 "name": "Existed_Raid", 00:07:56.716 "uuid": "84aace1a-43d0-4b1f-991b-733cf70601ee", 00:07:56.716 "strip_size_kb": 64, 00:07:56.716 "state": "online", 00:07:56.716 "raid_level": "raid0", 00:07:56.716 "superblock": true, 00:07:56.716 "num_base_bdevs": 3, 00:07:56.716 "num_base_bdevs_discovered": 3, 00:07:56.716 "num_base_bdevs_operational": 3, 00:07:56.716 "base_bdevs_list": [ 00:07:56.716 { 00:07:56.716 "name": "BaseBdev1", 00:07:56.716 "uuid": "7c98e01f-6c0c-40cf-a036-e9c756db6005", 00:07:56.716 "is_configured": true, 00:07:56.716 "data_offset": 2048, 00:07:56.716 "data_size": 63488 00:07:56.716 }, 00:07:56.716 { 00:07:56.716 "name": "BaseBdev2", 00:07:56.716 "uuid": "822f22d0-36a8-4c7d-a37c-2a00c8122425", 00:07:56.716 "is_configured": true, 00:07:56.716 "data_offset": 2048, 00:07:56.716 "data_size": 63488 00:07:56.716 }, 00:07:56.716 { 00:07:56.716 "name": "BaseBdev3", 00:07:56.716 "uuid": "2b47e74a-45bb-474c-99ff-9c5170f3e4c3", 00:07:56.716 "is_configured": true, 00:07:56.716 "data_offset": 2048, 00:07:56.716 "data_size": 63488 00:07:56.716 } 00:07:56.716 ] 00:07:56.716 }' 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.716 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.976 [2024-11-28 02:23:30.605945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.976 "name": "Existed_Raid", 00:07:56.976 "aliases": [ 00:07:56.976 "84aace1a-43d0-4b1f-991b-733cf70601ee" 00:07:56.976 ], 00:07:56.976 "product_name": "Raid Volume", 00:07:56.976 "block_size": 512, 00:07:56.976 "num_blocks": 190464, 00:07:56.976 "uuid": "84aace1a-43d0-4b1f-991b-733cf70601ee", 00:07:56.976 "assigned_rate_limits": { 00:07:56.976 "rw_ios_per_sec": 0, 00:07:56.976 "rw_mbytes_per_sec": 0, 00:07:56.976 "r_mbytes_per_sec": 0, 00:07:56.976 "w_mbytes_per_sec": 0 00:07:56.976 }, 00:07:56.976 "claimed": false, 00:07:56.976 "zoned": false, 00:07:56.976 "supported_io_types": { 00:07:56.976 "read": true, 00:07:56.976 "write": true, 00:07:56.976 "unmap": true, 00:07:56.976 "flush": true, 00:07:56.976 "reset": true, 00:07:56.976 "nvme_admin": false, 00:07:56.976 "nvme_io": false, 00:07:56.976 "nvme_io_md": false, 00:07:56.976 
"write_zeroes": true, 00:07:56.976 "zcopy": false, 00:07:56.976 "get_zone_info": false, 00:07:56.976 "zone_management": false, 00:07:56.976 "zone_append": false, 00:07:56.976 "compare": false, 00:07:56.976 "compare_and_write": false, 00:07:56.976 "abort": false, 00:07:56.976 "seek_hole": false, 00:07:56.976 "seek_data": false, 00:07:56.976 "copy": false, 00:07:56.976 "nvme_iov_md": false 00:07:56.976 }, 00:07:56.976 "memory_domains": [ 00:07:56.976 { 00:07:56.976 "dma_device_id": "system", 00:07:56.976 "dma_device_type": 1 00:07:56.976 }, 00:07:56.976 { 00:07:56.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.976 "dma_device_type": 2 00:07:56.976 }, 00:07:56.976 { 00:07:56.976 "dma_device_id": "system", 00:07:56.976 "dma_device_type": 1 00:07:56.976 }, 00:07:56.976 { 00:07:56.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.976 "dma_device_type": 2 00:07:56.976 }, 00:07:56.976 { 00:07:56.976 "dma_device_id": "system", 00:07:56.976 "dma_device_type": 1 00:07:56.976 }, 00:07:56.976 { 00:07:56.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.976 "dma_device_type": 2 00:07:56.976 } 00:07:56.976 ], 00:07:56.976 "driver_specific": { 00:07:56.976 "raid": { 00:07:56.976 "uuid": "84aace1a-43d0-4b1f-991b-733cf70601ee", 00:07:56.976 "strip_size_kb": 64, 00:07:56.976 "state": "online", 00:07:56.976 "raid_level": "raid0", 00:07:56.976 "superblock": true, 00:07:56.976 "num_base_bdevs": 3, 00:07:56.976 "num_base_bdevs_discovered": 3, 00:07:56.976 "num_base_bdevs_operational": 3, 00:07:56.976 "base_bdevs_list": [ 00:07:56.976 { 00:07:56.976 "name": "BaseBdev1", 00:07:56.976 "uuid": "7c98e01f-6c0c-40cf-a036-e9c756db6005", 00:07:56.976 "is_configured": true, 00:07:56.976 "data_offset": 2048, 00:07:56.976 "data_size": 63488 00:07:56.976 }, 00:07:56.976 { 00:07:56.976 "name": "BaseBdev2", 00:07:56.976 "uuid": "822f22d0-36a8-4c7d-a37c-2a00c8122425", 00:07:56.976 "is_configured": true, 00:07:56.976 "data_offset": 2048, 00:07:56.976 "data_size": 63488 00:07:56.976 }, 
00:07:56.976 { 00:07:56.976 "name": "BaseBdev3", 00:07:56.976 "uuid": "2b47e74a-45bb-474c-99ff-9c5170f3e4c3", 00:07:56.976 "is_configured": true, 00:07:56.976 "data_offset": 2048, 00:07:56.976 "data_size": 63488 00:07:56.976 } 00:07:56.976 ] 00:07:56.976 } 00:07:56.976 } 00:07:56.976 }' 00:07:56.976 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:57.237 BaseBdev2 00:07:57.237 BaseBdev3' 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.237 
02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.237 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.237 [2024-11-28 02:23:30.865212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.237 [2024-11-28 02:23:30.865240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.237 [2024-11-28 02:23:30.865291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.497 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.497 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.497 "name": "Existed_Raid", 00:07:57.497 "uuid": "84aace1a-43d0-4b1f-991b-733cf70601ee", 00:07:57.497 "strip_size_kb": 64, 00:07:57.497 "state": "offline", 00:07:57.497 "raid_level": "raid0", 00:07:57.497 "superblock": true, 00:07:57.497 "num_base_bdevs": 3, 00:07:57.497 "num_base_bdevs_discovered": 2, 00:07:57.497 "num_base_bdevs_operational": 2, 00:07:57.497 "base_bdevs_list": [ 00:07:57.497 { 00:07:57.497 "name": null, 00:07:57.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.497 "is_configured": false, 00:07:57.497 "data_offset": 0, 00:07:57.497 "data_size": 63488 00:07:57.497 }, 00:07:57.497 { 00:07:57.497 "name": "BaseBdev2", 00:07:57.497 "uuid": "822f22d0-36a8-4c7d-a37c-2a00c8122425", 00:07:57.497 "is_configured": true, 00:07:57.497 "data_offset": 2048, 00:07:57.497 "data_size": 63488 00:07:57.497 }, 00:07:57.497 { 00:07:57.497 "name": "BaseBdev3", 00:07:57.497 "uuid": "2b47e74a-45bb-474c-99ff-9c5170f3e4c3", 
00:07:57.497 "is_configured": true, 00:07:57.497 "data_offset": 2048, 00:07:57.497 "data_size": 63488 00:07:57.497 } 00:07:57.497 ] 00:07:57.497 }' 00:07:57.497 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.497 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.757 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.017 [2024-11-28 02:23:31.439396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.017 [2024-11-28 02:23:31.586219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:58.017 [2024-11-28 02:23:31.586320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.017 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.278 BaseBdev2 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:58.278 02:23:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.278 [ 00:07:58.278 { 00:07:58.278 "name": "BaseBdev2", 00:07:58.278 "aliases": [ 00:07:58.278 "5891b0cb-f81e-4c4b-b931-968839e46ed3" 00:07:58.278 ], 00:07:58.278 "product_name": "Malloc disk", 00:07:58.278 "block_size": 512, 00:07:58.278 "num_blocks": 65536, 00:07:58.278 "uuid": "5891b0cb-f81e-4c4b-b931-968839e46ed3", 00:07:58.278 "assigned_rate_limits": { 00:07:58.278 "rw_ios_per_sec": 0, 00:07:58.278 "rw_mbytes_per_sec": 0, 00:07:58.278 "r_mbytes_per_sec": 0, 00:07:58.278 "w_mbytes_per_sec": 0 00:07:58.278 }, 00:07:58.278 "claimed": false, 00:07:58.278 "zoned": false, 00:07:58.278 "supported_io_types": { 00:07:58.278 "read": true, 00:07:58.278 "write": true, 00:07:58.278 "unmap": true, 00:07:58.278 "flush": true, 00:07:58.278 "reset": true, 00:07:58.278 "nvme_admin": false, 00:07:58.278 "nvme_io": false, 00:07:58.278 "nvme_io_md": false, 00:07:58.278 "write_zeroes": true, 00:07:58.278 "zcopy": true, 00:07:58.278 "get_zone_info": false, 00:07:58.278 
"zone_management": false, 00:07:58.278 "zone_append": false, 00:07:58.278 "compare": false, 00:07:58.278 "compare_and_write": false, 00:07:58.278 "abort": true, 00:07:58.278 "seek_hole": false, 00:07:58.278 "seek_data": false, 00:07:58.278 "copy": true, 00:07:58.278 "nvme_iov_md": false 00:07:58.278 }, 00:07:58.278 "memory_domains": [ 00:07:58.278 { 00:07:58.278 "dma_device_id": "system", 00:07:58.278 "dma_device_type": 1 00:07:58.278 }, 00:07:58.278 { 00:07:58.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.278 "dma_device_type": 2 00:07:58.278 } 00:07:58.278 ], 00:07:58.278 "driver_specific": {} 00:07:58.278 } 00:07:58.278 ] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.278 BaseBdev3 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.278 [ 00:07:58.278 { 00:07:58.278 "name": "BaseBdev3", 00:07:58.278 "aliases": [ 00:07:58.278 "0b6e5086-7b6a-4c45-9ad0-bd770be0efc7" 00:07:58.278 ], 00:07:58.278 "product_name": "Malloc disk", 00:07:58.278 "block_size": 512, 00:07:58.278 "num_blocks": 65536, 00:07:58.278 "uuid": "0b6e5086-7b6a-4c45-9ad0-bd770be0efc7", 00:07:58.278 "assigned_rate_limits": { 00:07:58.278 "rw_ios_per_sec": 0, 00:07:58.278 "rw_mbytes_per_sec": 0, 00:07:58.278 "r_mbytes_per_sec": 0, 00:07:58.278 "w_mbytes_per_sec": 0 00:07:58.278 }, 00:07:58.278 "claimed": false, 00:07:58.278 "zoned": false, 00:07:58.278 "supported_io_types": { 00:07:58.278 "read": true, 00:07:58.278 "write": true, 00:07:58.278 "unmap": true, 00:07:58.278 "flush": true, 00:07:58.278 "reset": true, 00:07:58.278 "nvme_admin": false, 00:07:58.278 "nvme_io": false, 00:07:58.278 "nvme_io_md": false, 00:07:58.278 "write_zeroes": true, 00:07:58.278 
"zcopy": true, 00:07:58.278 "get_zone_info": false, 00:07:58.278 "zone_management": false, 00:07:58.278 "zone_append": false, 00:07:58.278 "compare": false, 00:07:58.278 "compare_and_write": false, 00:07:58.278 "abort": true, 00:07:58.278 "seek_hole": false, 00:07:58.278 "seek_data": false, 00:07:58.278 "copy": true, 00:07:58.278 "nvme_iov_md": false 00:07:58.278 }, 00:07:58.278 "memory_domains": [ 00:07:58.278 { 00:07:58.278 "dma_device_id": "system", 00:07:58.278 "dma_device_type": 1 00:07:58.278 }, 00:07:58.278 { 00:07:58.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.278 "dma_device_type": 2 00:07:58.278 } 00:07:58.278 ], 00:07:58.278 "driver_specific": {} 00:07:58.278 } 00:07:58.278 ] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.278 [2024-11-28 02:23:31.891500] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:58.278 [2024-11-28 02:23:31.891914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:58.278 [2024-11-28 02:23:31.892002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.278 [2024-11-28 02:23:31.893915] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.278 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.279 02:23:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.279 "name": "Existed_Raid", 00:07:58.279 "uuid": "bade4b0e-07b5-43de-907c-5cb969f2b6ba", 00:07:58.279 "strip_size_kb": 64, 00:07:58.279 "state": "configuring", 00:07:58.279 "raid_level": "raid0", 00:07:58.279 "superblock": true, 00:07:58.279 "num_base_bdevs": 3, 00:07:58.279 "num_base_bdevs_discovered": 2, 00:07:58.279 "num_base_bdevs_operational": 3, 00:07:58.279 "base_bdevs_list": [ 00:07:58.279 { 00:07:58.279 "name": "BaseBdev1", 00:07:58.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.279 "is_configured": false, 00:07:58.279 "data_offset": 0, 00:07:58.279 "data_size": 0 00:07:58.279 }, 00:07:58.279 { 00:07:58.279 "name": "BaseBdev2", 00:07:58.279 "uuid": "5891b0cb-f81e-4c4b-b931-968839e46ed3", 00:07:58.279 "is_configured": true, 00:07:58.279 "data_offset": 2048, 00:07:58.279 "data_size": 63488 00:07:58.279 }, 00:07:58.279 { 00:07:58.279 "name": "BaseBdev3", 00:07:58.279 "uuid": "0b6e5086-7b6a-4c45-9ad0-bd770be0efc7", 00:07:58.279 "is_configured": true, 00:07:58.279 "data_offset": 2048, 00:07:58.279 "data_size": 63488 00:07:58.279 } 00:07:58.279 ] 00:07:58.279 }' 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.279 02:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.847 [2024-11-28 02:23:32.306828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.847 02:23:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.847 "name": "Existed_Raid", 00:07:58.847 "uuid": "bade4b0e-07b5-43de-907c-5cb969f2b6ba", 00:07:58.847 "strip_size_kb": 64, 
00:07:58.847 "state": "configuring", 00:07:58.847 "raid_level": "raid0", 00:07:58.847 "superblock": true, 00:07:58.847 "num_base_bdevs": 3, 00:07:58.847 "num_base_bdevs_discovered": 1, 00:07:58.847 "num_base_bdevs_operational": 3, 00:07:58.847 "base_bdevs_list": [ 00:07:58.847 { 00:07:58.847 "name": "BaseBdev1", 00:07:58.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.847 "is_configured": false, 00:07:58.847 "data_offset": 0, 00:07:58.847 "data_size": 0 00:07:58.847 }, 00:07:58.847 { 00:07:58.847 "name": null, 00:07:58.847 "uuid": "5891b0cb-f81e-4c4b-b931-968839e46ed3", 00:07:58.847 "is_configured": false, 00:07:58.847 "data_offset": 0, 00:07:58.847 "data_size": 63488 00:07:58.847 }, 00:07:58.847 { 00:07:58.847 "name": "BaseBdev3", 00:07:58.847 "uuid": "0b6e5086-7b6a-4c45-9ad0-bd770be0efc7", 00:07:58.847 "is_configured": true, 00:07:58.847 "data_offset": 2048, 00:07:58.847 "data_size": 63488 00:07:58.847 } 00:07:58.847 ] 00:07:58.847 }' 00:07:58.847 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.848 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.106 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:59.106 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.106 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.106 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.106 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.106 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:59.106 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:07:59.106 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.106 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.366 [2024-11-28 02:23:32.796561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.366 BaseBdev1 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.366 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.367 
[ 00:07:59.367 { 00:07:59.367 "name": "BaseBdev1", 00:07:59.367 "aliases": [ 00:07:59.367 "c78b3c04-2d91-412e-b292-f4feed7d7467" 00:07:59.367 ], 00:07:59.367 "product_name": "Malloc disk", 00:07:59.367 "block_size": 512, 00:07:59.367 "num_blocks": 65536, 00:07:59.367 "uuid": "c78b3c04-2d91-412e-b292-f4feed7d7467", 00:07:59.367 "assigned_rate_limits": { 00:07:59.367 "rw_ios_per_sec": 0, 00:07:59.367 "rw_mbytes_per_sec": 0, 00:07:59.367 "r_mbytes_per_sec": 0, 00:07:59.367 "w_mbytes_per_sec": 0 00:07:59.367 }, 00:07:59.367 "claimed": true, 00:07:59.367 "claim_type": "exclusive_write", 00:07:59.367 "zoned": false, 00:07:59.367 "supported_io_types": { 00:07:59.367 "read": true, 00:07:59.367 "write": true, 00:07:59.367 "unmap": true, 00:07:59.367 "flush": true, 00:07:59.367 "reset": true, 00:07:59.367 "nvme_admin": false, 00:07:59.367 "nvme_io": false, 00:07:59.367 "nvme_io_md": false, 00:07:59.367 "write_zeroes": true, 00:07:59.367 "zcopy": true, 00:07:59.367 "get_zone_info": false, 00:07:59.367 "zone_management": false, 00:07:59.367 "zone_append": false, 00:07:59.367 "compare": false, 00:07:59.367 "compare_and_write": false, 00:07:59.367 "abort": true, 00:07:59.367 "seek_hole": false, 00:07:59.367 "seek_data": false, 00:07:59.367 "copy": true, 00:07:59.367 "nvme_iov_md": false 00:07:59.367 }, 00:07:59.367 "memory_domains": [ 00:07:59.367 { 00:07:59.367 "dma_device_id": "system", 00:07:59.367 "dma_device_type": 1 00:07:59.367 }, 00:07:59.367 { 00:07:59.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.367 "dma_device_type": 2 00:07:59.367 } 00:07:59.367 ], 00:07:59.367 "driver_specific": {} 00:07:59.367 } 00:07:59.367 ] 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.367 "name": "Existed_Raid", 00:07:59.367 "uuid": "bade4b0e-07b5-43de-907c-5cb969f2b6ba", 00:07:59.367 "strip_size_kb": 64, 00:07:59.367 "state": "configuring", 00:07:59.367 "raid_level": "raid0", 00:07:59.367 "superblock": true, 
00:07:59.367 "num_base_bdevs": 3, 00:07:59.367 "num_base_bdevs_discovered": 2, 00:07:59.367 "num_base_bdevs_operational": 3, 00:07:59.367 "base_bdevs_list": [ 00:07:59.367 { 00:07:59.367 "name": "BaseBdev1", 00:07:59.367 "uuid": "c78b3c04-2d91-412e-b292-f4feed7d7467", 00:07:59.367 "is_configured": true, 00:07:59.367 "data_offset": 2048, 00:07:59.367 "data_size": 63488 00:07:59.367 }, 00:07:59.367 { 00:07:59.367 "name": null, 00:07:59.367 "uuid": "5891b0cb-f81e-4c4b-b931-968839e46ed3", 00:07:59.367 "is_configured": false, 00:07:59.367 "data_offset": 0, 00:07:59.367 "data_size": 63488 00:07:59.367 }, 00:07:59.367 { 00:07:59.367 "name": "BaseBdev3", 00:07:59.367 "uuid": "0b6e5086-7b6a-4c45-9ad0-bd770be0efc7", 00:07:59.367 "is_configured": true, 00:07:59.367 "data_offset": 2048, 00:07:59.367 "data_size": 63488 00:07:59.367 } 00:07:59.367 ] 00:07:59.367 }' 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.367 02:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.627 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:59.627 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.627 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.627 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.627 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.627 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:59.627 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:59.627 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:07:59.627 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.628 [2024-11-28 02:23:33.287742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.628 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:59.887 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.887 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.887 "name": "Existed_Raid", 00:07:59.887 "uuid": "bade4b0e-07b5-43de-907c-5cb969f2b6ba", 00:07:59.887 "strip_size_kb": 64, 00:07:59.887 "state": "configuring", 00:07:59.887 "raid_level": "raid0", 00:07:59.887 "superblock": true, 00:07:59.887 "num_base_bdevs": 3, 00:07:59.887 "num_base_bdevs_discovered": 1, 00:07:59.887 "num_base_bdevs_operational": 3, 00:07:59.887 "base_bdevs_list": [ 00:07:59.887 { 00:07:59.887 "name": "BaseBdev1", 00:07:59.887 "uuid": "c78b3c04-2d91-412e-b292-f4feed7d7467", 00:07:59.887 "is_configured": true, 00:07:59.887 "data_offset": 2048, 00:07:59.887 "data_size": 63488 00:07:59.887 }, 00:07:59.887 { 00:07:59.887 "name": null, 00:07:59.887 "uuid": "5891b0cb-f81e-4c4b-b931-968839e46ed3", 00:07:59.887 "is_configured": false, 00:07:59.887 "data_offset": 0, 00:07:59.887 "data_size": 63488 00:07:59.887 }, 00:07:59.887 { 00:07:59.887 "name": null, 00:07:59.887 "uuid": "0b6e5086-7b6a-4c45-9ad0-bd770be0efc7", 00:07:59.887 "is_configured": false, 00:07:59.887 "data_offset": 0, 00:07:59.887 "data_size": 63488 00:07:59.887 } 00:07:59.887 ] 00:07:59.887 }' 00:07:59.887 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.887 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.148 [2024-11-28 02:23:33.679132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.148 "name": "Existed_Raid", 00:08:00.148 "uuid": "bade4b0e-07b5-43de-907c-5cb969f2b6ba", 00:08:00.148 "strip_size_kb": 64, 00:08:00.148 "state": "configuring", 00:08:00.148 "raid_level": "raid0", 00:08:00.148 "superblock": true, 00:08:00.148 "num_base_bdevs": 3, 00:08:00.148 "num_base_bdevs_discovered": 2, 00:08:00.148 "num_base_bdevs_operational": 3, 00:08:00.148 "base_bdevs_list": [ 00:08:00.148 { 00:08:00.148 "name": "BaseBdev1", 00:08:00.148 "uuid": "c78b3c04-2d91-412e-b292-f4feed7d7467", 00:08:00.148 "is_configured": true, 00:08:00.148 "data_offset": 2048, 00:08:00.148 "data_size": 63488 00:08:00.148 }, 00:08:00.148 { 00:08:00.148 "name": null, 00:08:00.148 "uuid": "5891b0cb-f81e-4c4b-b931-968839e46ed3", 00:08:00.148 "is_configured": false, 00:08:00.148 "data_offset": 0, 00:08:00.148 "data_size": 63488 00:08:00.148 }, 00:08:00.148 { 00:08:00.148 "name": "BaseBdev3", 00:08:00.148 "uuid": "0b6e5086-7b6a-4c45-9ad0-bd770be0efc7", 00:08:00.148 "is_configured": true, 00:08:00.148 "data_offset": 2048, 00:08:00.148 "data_size": 63488 00:08:00.148 } 00:08:00.148 ] 00:08:00.148 }' 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.148 02:23:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.718 [2024-11-28 02:23:34.170295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.718 "name": "Existed_Raid", 00:08:00.718 "uuid": "bade4b0e-07b5-43de-907c-5cb969f2b6ba", 00:08:00.718 "strip_size_kb": 64, 00:08:00.718 "state": "configuring", 00:08:00.718 "raid_level": "raid0", 00:08:00.718 "superblock": true, 00:08:00.718 "num_base_bdevs": 3, 00:08:00.718 "num_base_bdevs_discovered": 1, 00:08:00.718 "num_base_bdevs_operational": 3, 00:08:00.718 "base_bdevs_list": [ 00:08:00.718 { 00:08:00.718 "name": null, 00:08:00.718 "uuid": "c78b3c04-2d91-412e-b292-f4feed7d7467", 00:08:00.718 "is_configured": false, 00:08:00.718 "data_offset": 0, 00:08:00.718 "data_size": 63488 00:08:00.718 }, 00:08:00.718 { 00:08:00.718 "name": null, 00:08:00.718 "uuid": "5891b0cb-f81e-4c4b-b931-968839e46ed3", 00:08:00.718 "is_configured": false, 00:08:00.718 "data_offset": 0, 00:08:00.718 
"data_size": 63488 00:08:00.718 }, 00:08:00.718 { 00:08:00.718 "name": "BaseBdev3", 00:08:00.718 "uuid": "0b6e5086-7b6a-4c45-9ad0-bd770be0efc7", 00:08:00.718 "is_configured": true, 00:08:00.718 "data_offset": 2048, 00:08:00.718 "data_size": 63488 00:08:00.718 } 00:08:00.718 ] 00:08:00.718 }' 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.718 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.287 [2024-11-28 02:23:34.725563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.287 02:23:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.287 "name": "Existed_Raid", 00:08:01.287 "uuid": "bade4b0e-07b5-43de-907c-5cb969f2b6ba", 00:08:01.287 "strip_size_kb": 64, 00:08:01.287 "state": "configuring", 00:08:01.287 "raid_level": "raid0", 00:08:01.287 "superblock": true, 00:08:01.287 "num_base_bdevs": 3, 00:08:01.287 
"num_base_bdevs_discovered": 2, 00:08:01.287 "num_base_bdevs_operational": 3, 00:08:01.287 "base_bdevs_list": [ 00:08:01.287 { 00:08:01.287 "name": null, 00:08:01.287 "uuid": "c78b3c04-2d91-412e-b292-f4feed7d7467", 00:08:01.287 "is_configured": false, 00:08:01.287 "data_offset": 0, 00:08:01.287 "data_size": 63488 00:08:01.287 }, 00:08:01.287 { 00:08:01.287 "name": "BaseBdev2", 00:08:01.287 "uuid": "5891b0cb-f81e-4c4b-b931-968839e46ed3", 00:08:01.287 "is_configured": true, 00:08:01.287 "data_offset": 2048, 00:08:01.287 "data_size": 63488 00:08:01.287 }, 00:08:01.287 { 00:08:01.287 "name": "BaseBdev3", 00:08:01.287 "uuid": "0b6e5086-7b6a-4c45-9ad0-bd770be0efc7", 00:08:01.287 "is_configured": true, 00:08:01.287 "data_offset": 2048, 00:08:01.287 "data_size": 63488 00:08:01.287 } 00:08:01.287 ] 00:08:01.287 }' 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.287 02:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.547 02:23:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c78b3c04-2d91-412e-b292-f4feed7d7467 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.547 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.806 NewBaseBdev 00:08:01.806 [2024-11-28 02:23:35.251151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:01.806 [2024-11-28 02:23:35.251376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:01.806 [2024-11-28 02:23:35.251393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:01.806 [2024-11-28 02:23:35.251633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:01.806 [2024-11-28 02:23:35.251772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:01.806 [2024-11-28 02:23:35.251781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:01.806 [2024-11-28 02:23:35.251915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.806 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.806 [ 00:08:01.806 { 00:08:01.806 "name": "NewBaseBdev", 00:08:01.806 "aliases": [ 00:08:01.806 "c78b3c04-2d91-412e-b292-f4feed7d7467" 00:08:01.806 ], 00:08:01.806 "product_name": "Malloc disk", 00:08:01.806 "block_size": 512, 00:08:01.806 "num_blocks": 65536, 00:08:01.806 "uuid": "c78b3c04-2d91-412e-b292-f4feed7d7467", 00:08:01.806 "assigned_rate_limits": { 00:08:01.806 "rw_ios_per_sec": 0, 00:08:01.806 "rw_mbytes_per_sec": 0, 00:08:01.806 "r_mbytes_per_sec": 0, 00:08:01.806 "w_mbytes_per_sec": 0 00:08:01.806 }, 00:08:01.806 "claimed": true, 00:08:01.806 "claim_type": "exclusive_write", 00:08:01.806 "zoned": false, 00:08:01.806 "supported_io_types": { 00:08:01.806 "read": true, 00:08:01.806 "write": true, 
00:08:01.806 "unmap": true, 00:08:01.806 "flush": true, 00:08:01.806 "reset": true, 00:08:01.806 "nvme_admin": false, 00:08:01.806 "nvme_io": false, 00:08:01.806 "nvme_io_md": false, 00:08:01.806 "write_zeroes": true, 00:08:01.806 "zcopy": true, 00:08:01.806 "get_zone_info": false, 00:08:01.806 "zone_management": false, 00:08:01.806 "zone_append": false, 00:08:01.806 "compare": false, 00:08:01.807 "compare_and_write": false, 00:08:01.807 "abort": true, 00:08:01.807 "seek_hole": false, 00:08:01.807 "seek_data": false, 00:08:01.807 "copy": true, 00:08:01.807 "nvme_iov_md": false 00:08:01.807 }, 00:08:01.807 "memory_domains": [ 00:08:01.807 { 00:08:01.807 "dma_device_id": "system", 00:08:01.807 "dma_device_type": 1 00:08:01.807 }, 00:08:01.807 { 00:08:01.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.807 "dma_device_type": 2 00:08:01.807 } 00:08:01.807 ], 00:08:01.807 "driver_specific": {} 00:08:01.807 } 00:08:01.807 ] 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.807 "name": "Existed_Raid", 00:08:01.807 "uuid": "bade4b0e-07b5-43de-907c-5cb969f2b6ba", 00:08:01.807 "strip_size_kb": 64, 00:08:01.807 "state": "online", 00:08:01.807 "raid_level": "raid0", 00:08:01.807 "superblock": true, 00:08:01.807 "num_base_bdevs": 3, 00:08:01.807 "num_base_bdevs_discovered": 3, 00:08:01.807 "num_base_bdevs_operational": 3, 00:08:01.807 "base_bdevs_list": [ 00:08:01.807 { 00:08:01.807 "name": "NewBaseBdev", 00:08:01.807 "uuid": "c78b3c04-2d91-412e-b292-f4feed7d7467", 00:08:01.807 "is_configured": true, 00:08:01.807 "data_offset": 2048, 00:08:01.807 "data_size": 63488 00:08:01.807 }, 00:08:01.807 { 00:08:01.807 "name": "BaseBdev2", 00:08:01.807 "uuid": "5891b0cb-f81e-4c4b-b931-968839e46ed3", 00:08:01.807 "is_configured": true, 00:08:01.807 "data_offset": 2048, 00:08:01.807 "data_size": 63488 00:08:01.807 }, 00:08:01.807 { 00:08:01.807 "name": "BaseBdev3", 00:08:01.807 "uuid": 
"0b6e5086-7b6a-4c45-9ad0-bd770be0efc7", 00:08:01.807 "is_configured": true, 00:08:01.807 "data_offset": 2048, 00:08:01.807 "data_size": 63488 00:08:01.807 } 00:08:01.807 ] 00:08:01.807 }' 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.807 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.067 [2024-11-28 02:23:35.666794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.067 "name": "Existed_Raid", 00:08:02.067 "aliases": [ 00:08:02.067 "bade4b0e-07b5-43de-907c-5cb969f2b6ba" 
00:08:02.067 ], 00:08:02.067 "product_name": "Raid Volume", 00:08:02.067 "block_size": 512, 00:08:02.067 "num_blocks": 190464, 00:08:02.067 "uuid": "bade4b0e-07b5-43de-907c-5cb969f2b6ba", 00:08:02.067 "assigned_rate_limits": { 00:08:02.067 "rw_ios_per_sec": 0, 00:08:02.067 "rw_mbytes_per_sec": 0, 00:08:02.067 "r_mbytes_per_sec": 0, 00:08:02.067 "w_mbytes_per_sec": 0 00:08:02.067 }, 00:08:02.067 "claimed": false, 00:08:02.067 "zoned": false, 00:08:02.067 "supported_io_types": { 00:08:02.067 "read": true, 00:08:02.067 "write": true, 00:08:02.067 "unmap": true, 00:08:02.067 "flush": true, 00:08:02.067 "reset": true, 00:08:02.067 "nvme_admin": false, 00:08:02.067 "nvme_io": false, 00:08:02.067 "nvme_io_md": false, 00:08:02.067 "write_zeroes": true, 00:08:02.067 "zcopy": false, 00:08:02.067 "get_zone_info": false, 00:08:02.067 "zone_management": false, 00:08:02.067 "zone_append": false, 00:08:02.067 "compare": false, 00:08:02.067 "compare_and_write": false, 00:08:02.067 "abort": false, 00:08:02.067 "seek_hole": false, 00:08:02.067 "seek_data": false, 00:08:02.067 "copy": false, 00:08:02.067 "nvme_iov_md": false 00:08:02.067 }, 00:08:02.067 "memory_domains": [ 00:08:02.067 { 00:08:02.067 "dma_device_id": "system", 00:08:02.067 "dma_device_type": 1 00:08:02.067 }, 00:08:02.067 { 00:08:02.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.067 "dma_device_type": 2 00:08:02.067 }, 00:08:02.067 { 00:08:02.067 "dma_device_id": "system", 00:08:02.067 "dma_device_type": 1 00:08:02.067 }, 00:08:02.067 { 00:08:02.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.067 "dma_device_type": 2 00:08:02.067 }, 00:08:02.067 { 00:08:02.067 "dma_device_id": "system", 00:08:02.067 "dma_device_type": 1 00:08:02.067 }, 00:08:02.067 { 00:08:02.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.067 "dma_device_type": 2 00:08:02.067 } 00:08:02.067 ], 00:08:02.067 "driver_specific": { 00:08:02.067 "raid": { 00:08:02.067 "uuid": "bade4b0e-07b5-43de-907c-5cb969f2b6ba", 00:08:02.067 
"strip_size_kb": 64, 00:08:02.067 "state": "online", 00:08:02.067 "raid_level": "raid0", 00:08:02.067 "superblock": true, 00:08:02.067 "num_base_bdevs": 3, 00:08:02.067 "num_base_bdevs_discovered": 3, 00:08:02.067 "num_base_bdevs_operational": 3, 00:08:02.067 "base_bdevs_list": [ 00:08:02.067 { 00:08:02.067 "name": "NewBaseBdev", 00:08:02.067 "uuid": "c78b3c04-2d91-412e-b292-f4feed7d7467", 00:08:02.067 "is_configured": true, 00:08:02.067 "data_offset": 2048, 00:08:02.067 "data_size": 63488 00:08:02.067 }, 00:08:02.067 { 00:08:02.067 "name": "BaseBdev2", 00:08:02.067 "uuid": "5891b0cb-f81e-4c4b-b931-968839e46ed3", 00:08:02.067 "is_configured": true, 00:08:02.067 "data_offset": 2048, 00:08:02.067 "data_size": 63488 00:08:02.067 }, 00:08:02.067 { 00:08:02.067 "name": "BaseBdev3", 00:08:02.067 "uuid": "0b6e5086-7b6a-4c45-9ad0-bd770be0efc7", 00:08:02.067 "is_configured": true, 00:08:02.067 "data_offset": 2048, 00:08:02.067 "data_size": 63488 00:08:02.067 } 00:08:02.067 ] 00:08:02.067 } 00:08:02.067 } 00:08:02.067 }' 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:02.067 BaseBdev2 00:08:02.067 BaseBdev3' 00:08:02.067 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.327 [2024-11-28 02:23:35.918055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.327 [2024-11-28 02:23:35.918124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.327 [2024-11-28 02:23:35.918211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.327 [2024-11-28 02:23:35.918268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.327 [2024-11-28 02:23:35.918281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64262 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64262 ']' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 64262 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64262 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64262' 00:08:02.327 killing process with pid 64262 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64262 00:08:02.327 [2024-11-28 02:23:35.967146] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.327 02:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64262 00:08:02.586 [2024-11-28 02:23:36.250284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.967 02:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:03.967 00:08:03.967 real 0m10.144s 00:08:03.967 user 0m16.148s 00:08:03.967 sys 0m1.751s 00:08:03.967 02:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.967 ************************************ 00:08:03.967 END TEST raid_state_function_test_sb 00:08:03.967 ************************************ 00:08:03.967 02:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.967 02:23:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:03.967 02:23:37 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:03.967 02:23:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.967 02:23:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.967 ************************************ 00:08:03.967 START TEST raid_superblock_test 00:08:03.967 ************************************ 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:03.967 02:23:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64878 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64878 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64878 ']' 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.967 02:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.968 02:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.968 02:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.968 02:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.968 [2024-11-28 02:23:37.484450] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:03.968 [2024-11-28 02:23:37.484693] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64878 ] 00:08:04.227 [2024-11-28 02:23:37.656350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.227 [2024-11-28 02:23:37.768337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.486 [2024-11-28 02:23:37.960901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.486 [2024-11-28 02:23:37.961017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.745 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.745 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:04.745 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:04.745 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:04.745 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:04.745 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:04.745 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:04.745 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:04.746 
02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.746 malloc1 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.746 [2024-11-28 02:23:38.345109] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:04.746 [2024-11-28 02:23:38.345167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.746 [2024-11-28 02:23:38.345203] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:04.746 [2024-11-28 02:23:38.345212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.746 [2024-11-28 02:23:38.347214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.746 [2024-11-28 02:23:38.347251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:04.746 pt1 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.746 malloc2 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.746 [2024-11-28 02:23:38.398158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.746 [2024-11-28 02:23:38.398245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.746 [2024-11-28 02:23:38.398286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:04.746 [2024-11-28 02:23:38.398313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.746 [2024-11-28 02:23:38.400311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.746 [2024-11-28 02:23:38.400382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.746 
pt2 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.746 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.006 malloc3 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.006 [2024-11-28 02:23:38.492717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:05.006 [2024-11-28 02:23:38.492820] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.006 [2024-11-28 02:23:38.492857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:05.006 [2024-11-28 02:23:38.492884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.006 [2024-11-28 02:23:38.494925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.006 [2024-11-28 02:23:38.495017] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:05.006 pt3 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.006 [2024-11-28 02:23:38.504746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:05.006 [2024-11-28 02:23:38.506514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:05.006 [2024-11-28 02:23:38.506628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:05.006 [2024-11-28 02:23:38.506796] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:05.006 [2024-11-28 02:23:38.506856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:05.006 [2024-11-28 02:23:38.507109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:05.006 [2024-11-28 02:23:38.507298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:05.006 [2024-11-28 02:23:38.507339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:05.006 [2024-11-28 02:23:38.507529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.006 02:23:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.006 "name": "raid_bdev1", 00:08:05.006 "uuid": "38b77243-f58b-4e7c-a69a-8c6a2295f292", 00:08:05.006 "strip_size_kb": 64, 00:08:05.006 "state": "online", 00:08:05.006 "raid_level": "raid0", 00:08:05.006 "superblock": true, 00:08:05.006 "num_base_bdevs": 3, 00:08:05.006 "num_base_bdevs_discovered": 3, 00:08:05.006 "num_base_bdevs_operational": 3, 00:08:05.006 "base_bdevs_list": [ 00:08:05.006 { 00:08:05.006 "name": "pt1", 00:08:05.006 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:05.006 "is_configured": true, 00:08:05.006 "data_offset": 2048, 00:08:05.006 "data_size": 63488 00:08:05.006 }, 00:08:05.006 { 00:08:05.006 "name": "pt2", 00:08:05.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.006 "is_configured": true, 00:08:05.006 "data_offset": 2048, 00:08:05.006 "data_size": 63488 00:08:05.006 }, 00:08:05.006 { 00:08:05.006 "name": "pt3", 00:08:05.006 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:05.006 "is_configured": true, 00:08:05.006 "data_offset": 2048, 00:08:05.006 "data_size": 63488 00:08:05.006 } 00:08:05.006 ] 00:08:05.006 }' 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.006 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.266 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:05.266 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:05.266 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.266 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:05.526 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.526 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.526 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.526 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.526 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.526 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.526 [2024-11-28 02:23:38.952291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.526 02:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.526 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.526 "name": "raid_bdev1", 00:08:05.526 "aliases": [ 00:08:05.526 "38b77243-f58b-4e7c-a69a-8c6a2295f292" 00:08:05.526 ], 00:08:05.526 "product_name": "Raid Volume", 00:08:05.526 "block_size": 512, 00:08:05.526 "num_blocks": 190464, 00:08:05.526 "uuid": "38b77243-f58b-4e7c-a69a-8c6a2295f292", 00:08:05.526 "assigned_rate_limits": { 00:08:05.526 "rw_ios_per_sec": 0, 00:08:05.526 "rw_mbytes_per_sec": 0, 00:08:05.526 "r_mbytes_per_sec": 0, 00:08:05.526 "w_mbytes_per_sec": 0 00:08:05.526 }, 00:08:05.526 "claimed": false, 00:08:05.526 "zoned": false, 00:08:05.526 "supported_io_types": { 00:08:05.526 "read": true, 00:08:05.526 "write": true, 00:08:05.526 "unmap": true, 00:08:05.526 "flush": true, 00:08:05.526 "reset": true, 00:08:05.526 "nvme_admin": false, 00:08:05.526 "nvme_io": false, 00:08:05.526 "nvme_io_md": false, 00:08:05.526 "write_zeroes": true, 00:08:05.526 "zcopy": false, 00:08:05.526 "get_zone_info": false, 00:08:05.526 "zone_management": false, 00:08:05.526 "zone_append": false, 00:08:05.526 "compare": 
false, 00:08:05.526 "compare_and_write": false, 00:08:05.526 "abort": false, 00:08:05.526 "seek_hole": false, 00:08:05.526 "seek_data": false, 00:08:05.526 "copy": false, 00:08:05.526 "nvme_iov_md": false 00:08:05.526 }, 00:08:05.526 "memory_domains": [ 00:08:05.526 { 00:08:05.526 "dma_device_id": "system", 00:08:05.526 "dma_device_type": 1 00:08:05.526 }, 00:08:05.526 { 00:08:05.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.526 "dma_device_type": 2 00:08:05.526 }, 00:08:05.526 { 00:08:05.526 "dma_device_id": "system", 00:08:05.526 "dma_device_type": 1 00:08:05.526 }, 00:08:05.526 { 00:08:05.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.526 "dma_device_type": 2 00:08:05.526 }, 00:08:05.526 { 00:08:05.526 "dma_device_id": "system", 00:08:05.526 "dma_device_type": 1 00:08:05.526 }, 00:08:05.526 { 00:08:05.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.526 "dma_device_type": 2 00:08:05.526 } 00:08:05.526 ], 00:08:05.526 "driver_specific": { 00:08:05.526 "raid": { 00:08:05.526 "uuid": "38b77243-f58b-4e7c-a69a-8c6a2295f292", 00:08:05.526 "strip_size_kb": 64, 00:08:05.526 "state": "online", 00:08:05.526 "raid_level": "raid0", 00:08:05.526 "superblock": true, 00:08:05.526 "num_base_bdevs": 3, 00:08:05.526 "num_base_bdevs_discovered": 3, 00:08:05.526 "num_base_bdevs_operational": 3, 00:08:05.526 "base_bdevs_list": [ 00:08:05.526 { 00:08:05.526 "name": "pt1", 00:08:05.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:05.526 "is_configured": true, 00:08:05.526 "data_offset": 2048, 00:08:05.526 "data_size": 63488 00:08:05.526 }, 00:08:05.526 { 00:08:05.526 "name": "pt2", 00:08:05.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.526 "is_configured": true, 00:08:05.526 "data_offset": 2048, 00:08:05.526 "data_size": 63488 00:08:05.526 }, 00:08:05.526 { 00:08:05.526 "name": "pt3", 00:08:05.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:05.526 "is_configured": true, 00:08:05.526 "data_offset": 2048, 00:08:05.526 "data_size": 
63488 00:08:05.526 } 00:08:05.526 ] 00:08:05.526 } 00:08:05.526 } 00:08:05.526 }' 00:08:05.526 02:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.526 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:05.526 pt2 00:08:05.526 pt3' 00:08:05.526 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.526 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.526 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.526 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:05.526 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.526 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.526 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.527 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:05.787 [2024-11-28 02:23:39.203782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=38b77243-f58b-4e7c-a69a-8c6a2295f292 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 38b77243-f58b-4e7c-a69a-8c6a2295f292 ']' 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.787 [2024-11-28 02:23:39.251450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.787 [2024-11-28 02:23:39.251514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.787 [2024-11-28 02:23:39.251610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.787 [2024-11-28 02:23:39.251689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.787 [2024-11-28 02:23:39.251740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:05.787 02:23:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.787 [2024-11-28 02:23:39.379264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:05.787 [2024-11-28 02:23:39.381128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:05.787 [2024-11-28 02:23:39.381234] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:05.787 [2024-11-28 02:23:39.381301] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:05.787 [2024-11-28 02:23:39.381382] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:05.787 [2024-11-28 02:23:39.381432] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:05.787 [2024-11-28 02:23:39.381485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.787 [2024-11-28 02:23:39.381524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:05.787 request: 00:08:05.787 { 00:08:05.787 "name": "raid_bdev1", 00:08:05.787 "raid_level": "raid0", 00:08:05.787 "base_bdevs": [ 00:08:05.787 "malloc1", 00:08:05.787 "malloc2", 00:08:05.787 "malloc3" 00:08:05.787 ], 00:08:05.787 "strip_size_kb": 64, 00:08:05.787 "superblock": false, 00:08:05.787 "method": "bdev_raid_create", 00:08:05.787 "req_id": 1 00:08:05.787 } 00:08:05.787 Got JSON-RPC error response 00:08:05.787 response: 00:08:05.787 { 00:08:05.787 "code": -17, 00:08:05.787 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:05.787 } 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.787 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.788 [2024-11-28 02:23:39.435128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:05.788 [2024-11-28 02:23:39.435224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.788 [2024-11-28 02:23:39.435256] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:05.788 [2024-11-28 02:23:39.435281] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.788 [2024-11-28 02:23:39.437302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.788 [2024-11-28 02:23:39.437366] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:05.788 [2024-11-28 02:23:39.437468] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:05.788 [2024-11-28 02:23:39.437529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:05.788 pt1 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.788 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.047 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.047 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.047 "name": "raid_bdev1", 00:08:06.047 "uuid": "38b77243-f58b-4e7c-a69a-8c6a2295f292", 00:08:06.047 
"strip_size_kb": 64, 00:08:06.047 "state": "configuring", 00:08:06.047 "raid_level": "raid0", 00:08:06.047 "superblock": true, 00:08:06.047 "num_base_bdevs": 3, 00:08:06.047 "num_base_bdevs_discovered": 1, 00:08:06.047 "num_base_bdevs_operational": 3, 00:08:06.047 "base_bdevs_list": [ 00:08:06.047 { 00:08:06.047 "name": "pt1", 00:08:06.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:06.047 "is_configured": true, 00:08:06.047 "data_offset": 2048, 00:08:06.047 "data_size": 63488 00:08:06.047 }, 00:08:06.047 { 00:08:06.047 "name": null, 00:08:06.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.047 "is_configured": false, 00:08:06.047 "data_offset": 2048, 00:08:06.048 "data_size": 63488 00:08:06.048 }, 00:08:06.048 { 00:08:06.048 "name": null, 00:08:06.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:06.048 "is_configured": false, 00:08:06.048 "data_offset": 2048, 00:08:06.048 "data_size": 63488 00:08:06.048 } 00:08:06.048 ] 00:08:06.048 }' 00:08:06.048 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.048 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.307 [2024-11-28 02:23:39.814478] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:06.307 [2024-11-28 02:23:39.814582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.307 [2024-11-28 02:23:39.814626] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:06.307 [2024-11-28 02:23:39.814655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.307 [2024-11-28 02:23:39.815086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.307 [2024-11-28 02:23:39.815140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:06.307 [2024-11-28 02:23:39.815242] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:06.307 [2024-11-28 02:23:39.815297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:06.307 pt2 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.307 [2024-11-28 02:23:39.822477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.307 02:23:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.307 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.307 "name": "raid_bdev1", 00:08:06.307 "uuid": "38b77243-f58b-4e7c-a69a-8c6a2295f292", 00:08:06.307 "strip_size_kb": 64, 00:08:06.307 "state": "configuring", 00:08:06.307 "raid_level": "raid0", 00:08:06.307 "superblock": true, 00:08:06.307 "num_base_bdevs": 3, 00:08:06.307 "num_base_bdevs_discovered": 1, 00:08:06.307 "num_base_bdevs_operational": 3, 00:08:06.307 "base_bdevs_list": [ 00:08:06.307 { 00:08:06.307 "name": "pt1", 00:08:06.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:06.307 "is_configured": true, 00:08:06.308 "data_offset": 2048, 00:08:06.308 "data_size": 63488 00:08:06.308 }, 00:08:06.308 { 00:08:06.308 "name": null, 00:08:06.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.308 "is_configured": false, 00:08:06.308 "data_offset": 0, 00:08:06.308 "data_size": 63488 00:08:06.308 }, 00:08:06.308 { 00:08:06.308 "name": null, 00:08:06.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:06.308 
"is_configured": false, 00:08:06.308 "data_offset": 2048, 00:08:06.308 "data_size": 63488 00:08:06.308 } 00:08:06.308 ] 00:08:06.308 }' 00:08:06.308 02:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.308 02:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.567 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:06.567 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:06.567 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:06.567 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.827 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.827 [2024-11-28 02:23:40.249736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:06.827 [2024-11-28 02:23:40.249849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.827 [2024-11-28 02:23:40.249882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:06.827 [2024-11-28 02:23:40.249911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.827 [2024-11-28 02:23:40.250385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.827 [2024-11-28 02:23:40.250444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:06.827 [2024-11-28 02:23:40.250548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:06.827 [2024-11-28 02:23:40.250598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:06.827 pt2 00:08:06.827 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:06.827 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:06.827 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:06.827 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:06.827 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.827 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.827 [2024-11-28 02:23:40.261691] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:06.827 [2024-11-28 02:23:40.261789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.827 [2024-11-28 02:23:40.261817] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:06.827 [2024-11-28 02:23:40.261844] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.827 [2024-11-28 02:23:40.262222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.827 [2024-11-28 02:23:40.262281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:06.827 [2024-11-28 02:23:40.262363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:06.827 [2024-11-28 02:23:40.262408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:06.827 [2024-11-28 02:23:40.262534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:06.827 [2024-11-28 02:23:40.262572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:06.827 [2024-11-28 02:23:40.262815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:06.827 [2024-11-28 02:23:40.263006] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:06.827 [2024-11-28 02:23:40.263044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:06.827 [2024-11-28 02:23:40.263204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.827 pt3 00:08:06.827 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.828 "name": "raid_bdev1", 00:08:06.828 "uuid": "38b77243-f58b-4e7c-a69a-8c6a2295f292", 00:08:06.828 "strip_size_kb": 64, 00:08:06.828 "state": "online", 00:08:06.828 "raid_level": "raid0", 00:08:06.828 "superblock": true, 00:08:06.828 "num_base_bdevs": 3, 00:08:06.828 "num_base_bdevs_discovered": 3, 00:08:06.828 "num_base_bdevs_operational": 3, 00:08:06.828 "base_bdevs_list": [ 00:08:06.828 { 00:08:06.828 "name": "pt1", 00:08:06.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:06.828 "is_configured": true, 00:08:06.828 "data_offset": 2048, 00:08:06.828 "data_size": 63488 00:08:06.828 }, 00:08:06.828 { 00:08:06.828 "name": "pt2", 00:08:06.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.828 "is_configured": true, 00:08:06.828 "data_offset": 2048, 00:08:06.828 "data_size": 63488 00:08:06.828 }, 00:08:06.828 { 00:08:06.828 "name": "pt3", 00:08:06.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:06.828 "is_configured": true, 00:08:06.828 "data_offset": 2048, 00:08:06.828 "data_size": 63488 00:08:06.828 } 00:08:06.828 ] 00:08:06.828 }' 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.828 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:07.088 02:23:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.088 [2024-11-28 02:23:40.689259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:07.088 "name": "raid_bdev1", 00:08:07.088 "aliases": [ 00:08:07.088 "38b77243-f58b-4e7c-a69a-8c6a2295f292" 00:08:07.088 ], 00:08:07.088 "product_name": "Raid Volume", 00:08:07.088 "block_size": 512, 00:08:07.088 "num_blocks": 190464, 00:08:07.088 "uuid": "38b77243-f58b-4e7c-a69a-8c6a2295f292", 00:08:07.088 "assigned_rate_limits": { 00:08:07.088 "rw_ios_per_sec": 0, 00:08:07.088 "rw_mbytes_per_sec": 0, 00:08:07.088 "r_mbytes_per_sec": 0, 00:08:07.088 "w_mbytes_per_sec": 0 00:08:07.088 }, 00:08:07.088 "claimed": false, 00:08:07.088 "zoned": false, 00:08:07.088 "supported_io_types": { 00:08:07.088 "read": true, 00:08:07.088 "write": true, 00:08:07.088 "unmap": true, 00:08:07.088 "flush": true, 00:08:07.088 "reset": true, 00:08:07.088 "nvme_admin": false, 00:08:07.088 "nvme_io": false, 00:08:07.088 "nvme_io_md": false, 00:08:07.088 
"write_zeroes": true, 00:08:07.088 "zcopy": false, 00:08:07.088 "get_zone_info": false, 00:08:07.088 "zone_management": false, 00:08:07.088 "zone_append": false, 00:08:07.088 "compare": false, 00:08:07.088 "compare_and_write": false, 00:08:07.088 "abort": false, 00:08:07.088 "seek_hole": false, 00:08:07.088 "seek_data": false, 00:08:07.088 "copy": false, 00:08:07.088 "nvme_iov_md": false 00:08:07.088 }, 00:08:07.088 "memory_domains": [ 00:08:07.088 { 00:08:07.088 "dma_device_id": "system", 00:08:07.088 "dma_device_type": 1 00:08:07.088 }, 00:08:07.088 { 00:08:07.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.088 "dma_device_type": 2 00:08:07.088 }, 00:08:07.088 { 00:08:07.088 "dma_device_id": "system", 00:08:07.088 "dma_device_type": 1 00:08:07.088 }, 00:08:07.088 { 00:08:07.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.088 "dma_device_type": 2 00:08:07.088 }, 00:08:07.088 { 00:08:07.088 "dma_device_id": "system", 00:08:07.088 "dma_device_type": 1 00:08:07.088 }, 00:08:07.088 { 00:08:07.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.088 "dma_device_type": 2 00:08:07.088 } 00:08:07.088 ], 00:08:07.088 "driver_specific": { 00:08:07.088 "raid": { 00:08:07.088 "uuid": "38b77243-f58b-4e7c-a69a-8c6a2295f292", 00:08:07.088 "strip_size_kb": 64, 00:08:07.088 "state": "online", 00:08:07.088 "raid_level": "raid0", 00:08:07.088 "superblock": true, 00:08:07.088 "num_base_bdevs": 3, 00:08:07.088 "num_base_bdevs_discovered": 3, 00:08:07.088 "num_base_bdevs_operational": 3, 00:08:07.088 "base_bdevs_list": [ 00:08:07.088 { 00:08:07.088 "name": "pt1", 00:08:07.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.088 "is_configured": true, 00:08:07.088 "data_offset": 2048, 00:08:07.088 "data_size": 63488 00:08:07.088 }, 00:08:07.088 { 00:08:07.088 "name": "pt2", 00:08:07.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.088 "is_configured": true, 00:08:07.088 "data_offset": 2048, 00:08:07.088 "data_size": 63488 00:08:07.088 }, 00:08:07.088 
{ 00:08:07.088 "name": "pt3", 00:08:07.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:07.088 "is_configured": true, 00:08:07.088 "data_offset": 2048, 00:08:07.088 "data_size": 63488 00:08:07.088 } 00:08:07.088 ] 00:08:07.088 } 00:08:07.088 } 00:08:07.088 }' 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:07.088 pt2 00:08:07.088 pt3' 00:08:07.088 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.348 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.348 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.348 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:07.348 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.348 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.348 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.348 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.348 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.348 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:07.349 02:23:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.349 
[2024-11-28 02:23:40.924780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 38b77243-f58b-4e7c-a69a-8c6a2295f292 '!=' 38b77243-f58b-4e7c-a69a-8c6a2295f292 ']' 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64878 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64878 ']' 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64878 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64878 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.349 killing process with pid 64878 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64878' 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64878 00:08:07.349 [2024-11-28 02:23:40.997314] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.349 [2024-11-28 02:23:40.997409] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.349 [2024-11-28 02:23:40.997468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.349 [2024-11-28 02:23:40.997480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:07.349 02:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64878 00:08:07.918 [2024-11-28 02:23:41.290806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.951 02:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:08.951 00:08:08.951 real 0m4.965s 00:08:08.951 user 0m7.112s 00:08:08.951 sys 0m0.814s 00:08:08.951 02:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.951 02:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.951 ************************************ 00:08:08.951 END TEST raid_superblock_test 00:08:08.951 ************************************ 00:08:08.951 02:23:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:08.951 02:23:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.951 02:23:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.951 02:23:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.951 ************************************ 00:08:08.951 START TEST raid_read_error_test 00:08:08.951 ************************************ 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:08.951 02:23:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MuYq0FV2I9 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65131 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65131 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65131 ']' 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.951 02:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.951 [2024-11-28 02:23:42.530449] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:08.951 [2024-11-28 02:23:42.530573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65131 ] 00:08:09.211 [2024-11-28 02:23:42.705899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.211 [2024-11-28 02:23:42.811127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.471 [2024-11-28 02:23:43.002398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.471 [2024-11-28 02:23:43.002459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.732 BaseBdev1_malloc 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.732 true 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.732 [2024-11-28 02:23:43.382697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:09.732 [2024-11-28 02:23:43.382749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.732 [2024-11-28 02:23:43.382783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:09.732 [2024-11-28 02:23:43.382793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.732 [2024-11-28 02:23:43.384798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.732 [2024-11-28 02:23:43.384838] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:09.732 BaseBdev1 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.732 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.993 BaseBdev2_malloc 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.993 true 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.993 [2024-11-28 02:23:43.449309] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:09.993 [2024-11-28 02:23:43.449358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.993 [2024-11-28 02:23:43.449388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:09.993 [2024-11-28 02:23:43.449397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.993 [2024-11-28 02:23:43.451370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.993 [2024-11-28 02:23:43.451410] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:09.993 BaseBdev2 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.993 BaseBdev3_malloc 00:08:09.993 02:23:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.993 true 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.993 [2024-11-28 02:23:43.548375] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:09.993 [2024-11-28 02:23:43.548428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.993 [2024-11-28 02:23:43.548445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:09.993 [2024-11-28 02:23:43.548455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.993 [2024-11-28 02:23:43.550554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.993 [2024-11-28 02:23:43.550629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:09.993 BaseBdev3 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.993 [2024-11-28 02:23:43.560425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.993 [2024-11-28 02:23:43.562144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.993 [2024-11-28 02:23:43.562212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.993 [2024-11-28 02:23:43.562401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:09.993 [2024-11-28 02:23:43.562414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:09.993 [2024-11-28 02:23:43.562642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:09.993 [2024-11-28 02:23:43.562798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:09.993 [2024-11-28 02:23:43.562811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:09.993 [2024-11-28 02:23:43.562992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.993 02:23:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.993 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.994 "name": "raid_bdev1", 00:08:09.994 "uuid": "a6ed1843-0063-40a7-902e-a6d4143142f9", 00:08:09.994 "strip_size_kb": 64, 00:08:09.994 "state": "online", 00:08:09.994 "raid_level": "raid0", 00:08:09.994 "superblock": true, 00:08:09.994 "num_base_bdevs": 3, 00:08:09.994 "num_base_bdevs_discovered": 3, 00:08:09.994 "num_base_bdevs_operational": 3, 00:08:09.994 "base_bdevs_list": [ 00:08:09.994 { 00:08:09.994 "name": "BaseBdev1", 00:08:09.994 "uuid": "8b080bd8-8398-511c-b7d3-5dbb9a048951", 00:08:09.994 "is_configured": true, 00:08:09.994 "data_offset": 2048, 00:08:09.994 "data_size": 63488 00:08:09.994 }, 00:08:09.994 { 00:08:09.994 "name": "BaseBdev2", 00:08:09.994 "uuid": "83c1c72f-3780-53b5-ba35-dfe57fcea2e3", 00:08:09.994 "is_configured": true, 00:08:09.994 "data_offset": 2048, 00:08:09.994 "data_size": 63488 
00:08:09.994 }, 00:08:09.994 { 00:08:09.994 "name": "BaseBdev3", 00:08:09.994 "uuid": "0219163e-e2f9-5da0-a12a-52bbc5fd9f0f", 00:08:09.994 "is_configured": true, 00:08:09.994 "data_offset": 2048, 00:08:09.994 "data_size": 63488 00:08:09.994 } 00:08:09.994 ] 00:08:09.994 }' 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.994 02:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.563 02:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:10.563 02:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:10.563 [2024-11-28 02:23:44.096720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.502 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.502 "name": "raid_bdev1", 00:08:11.502 "uuid": "a6ed1843-0063-40a7-902e-a6d4143142f9", 00:08:11.502 "strip_size_kb": 64, 00:08:11.502 "state": "online", 00:08:11.502 "raid_level": "raid0", 00:08:11.502 "superblock": true, 00:08:11.502 "num_base_bdevs": 3, 00:08:11.502 "num_base_bdevs_discovered": 3, 00:08:11.502 "num_base_bdevs_operational": 3, 00:08:11.502 "base_bdevs_list": [ 00:08:11.502 { 00:08:11.502 "name": "BaseBdev1", 00:08:11.502 "uuid": "8b080bd8-8398-511c-b7d3-5dbb9a048951", 00:08:11.502 "is_configured": true, 00:08:11.502 "data_offset": 2048, 00:08:11.502 "data_size": 63488 
00:08:11.502 }, 00:08:11.502 { 00:08:11.502 "name": "BaseBdev2", 00:08:11.502 "uuid": "83c1c72f-3780-53b5-ba35-dfe57fcea2e3", 00:08:11.502 "is_configured": true, 00:08:11.503 "data_offset": 2048, 00:08:11.503 "data_size": 63488 00:08:11.503 }, 00:08:11.503 { 00:08:11.503 "name": "BaseBdev3", 00:08:11.503 "uuid": "0219163e-e2f9-5da0-a12a-52bbc5fd9f0f", 00:08:11.503 "is_configured": true, 00:08:11.503 "data_offset": 2048, 00:08:11.503 "data_size": 63488 00:08:11.503 } 00:08:11.503 ] 00:08:11.503 }' 00:08:11.503 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.503 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.070 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:12.070 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.070 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.070 [2024-11-28 02:23:45.536779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.070 [2024-11-28 02:23:45.536893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.070 [2024-11-28 02:23:45.539524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.070 [2024-11-28 02:23:45.539568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.070 [2024-11-28 02:23:45.539602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.070 [2024-11-28 02:23:45.539611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:12.070 { 00:08:12.070 "results": [ 00:08:12.070 { 00:08:12.070 "job": "raid_bdev1", 00:08:12.070 "core_mask": "0x1", 00:08:12.070 "workload": "randrw", 00:08:12.070 "percentage": 50, 
00:08:12.070 "status": "finished", 00:08:12.070 "queue_depth": 1, 00:08:12.070 "io_size": 131072, 00:08:12.070 "runtime": 1.441284, 00:08:12.070 "iops": 16402.041512984255, 00:08:12.070 "mibps": 2050.255189123032, 00:08:12.070 "io_failed": 1, 00:08:12.070 "io_timeout": 0, 00:08:12.070 "avg_latency_us": 84.4584954456112, 00:08:12.070 "min_latency_us": 21.128384279475984, 00:08:12.070 "max_latency_us": 1337.907423580786 00:08:12.070 } 00:08:12.070 ], 00:08:12.070 "core_count": 1 00:08:12.070 } 00:08:12.070 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.070 02:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65131 00:08:12.070 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65131 ']' 00:08:12.070 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65131 00:08:12.070 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:12.071 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.071 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65131 00:08:12.071 killing process with pid 65131 00:08:12.071 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.071 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.071 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65131' 00:08:12.071 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65131 00:08:12.071 [2024-11-28 02:23:45.584490] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.071 02:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65131 00:08:12.330 [2024-11-28 
02:23:45.802999] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.269 02:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MuYq0FV2I9 00:08:13.269 02:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:13.269 02:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:13.269 02:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:08:13.269 02:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:13.269 ************************************ 00:08:13.269 END TEST raid_read_error_test 00:08:13.269 ************************************ 00:08:13.269 02:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.269 02:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.269 02:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:08:13.269 00:08:13.269 real 0m4.509s 00:08:13.270 user 0m5.385s 00:08:13.270 sys 0m0.546s 00:08:13.270 02:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.270 02:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.529 02:23:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:13.529 02:23:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:13.529 02:23:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.529 02:23:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.529 ************************************ 00:08:13.529 START TEST raid_write_error_test 00:08:13.529 ************************************ 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:13.529 02:23:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:13.529 02:23:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xoRqBCHROM 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65271 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65271 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65271 ']' 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.529 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.529 [2024-11-28 02:23:47.112778] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:13.529 [2024-11-28 02:23:47.113255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65271 ] 00:08:13.789 [2024-11-28 02:23:47.271869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.789 [2024-11-28 02:23:47.382417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.049 [2024-11-28 02:23:47.571424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.049 [2024-11-28 02:23:47.571474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.309 BaseBdev1_malloc 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.309 true 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.309 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.569 [2024-11-28 02:23:47.989089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:14.569 [2024-11-28 02:23:47.989199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.569 [2024-11-28 02:23:47.989222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:14.569 [2024-11-28 02:23:47.989233] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.569 [2024-11-28 02:23:47.991229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.569 [2024-11-28 02:23:47.991269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:14.569 BaseBdev1 00:08:14.569 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.569 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:14.569 02:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:14.569 02:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.569 02:23:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.569 BaseBdev2_malloc 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.569 true 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.569 [2024-11-28 02:23:48.052210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:14.569 [2024-11-28 02:23:48.052260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.569 [2024-11-28 02:23:48.052275] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:14.569 [2024-11-28 02:23:48.052285] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.569 [2024-11-28 02:23:48.054262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.569 [2024-11-28 02:23:48.054297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:14.569 BaseBdev2 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:14.569 02:23:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.569 BaseBdev3_malloc 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.569 true 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.569 [2024-11-28 02:23:48.150302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:14.569 [2024-11-28 02:23:48.150404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.569 [2024-11-28 02:23:48.150423] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:14.569 [2024-11-28 02:23:48.150434] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.569 [2024-11-28 02:23:48.152414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.569 [2024-11-28 02:23:48.152452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:14.569 BaseBdev3 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.569 [2024-11-28 02:23:48.162348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.569 [2024-11-28 02:23:48.164076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.569 [2024-11-28 02:23:48.164143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:14.569 [2024-11-28 02:23:48.164328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:14.569 [2024-11-28 02:23:48.164341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:14.569 [2024-11-28 02:23:48.164568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:14.569 [2024-11-28 02:23:48.164720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:14.569 [2024-11-28 02:23:48.164733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:14.569 [2024-11-28 02:23:48.164873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.569 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.570 "name": "raid_bdev1", 00:08:14.570 "uuid": "53f87646-fad8-424a-84af-b6162e71f22c", 00:08:14.570 "strip_size_kb": 64, 00:08:14.570 "state": "online", 00:08:14.570 "raid_level": "raid0", 00:08:14.570 "superblock": true, 00:08:14.570 "num_base_bdevs": 3, 00:08:14.570 "num_base_bdevs_discovered": 3, 00:08:14.570 "num_base_bdevs_operational": 3, 00:08:14.570 "base_bdevs_list": [ 00:08:14.570 { 00:08:14.570 "name": "BaseBdev1", 
00:08:14.570 "uuid": "2768904e-5b91-5daa-833e-9da74ae4f69c", 00:08:14.570 "is_configured": true, 00:08:14.570 "data_offset": 2048, 00:08:14.570 "data_size": 63488 00:08:14.570 }, 00:08:14.570 { 00:08:14.570 "name": "BaseBdev2", 00:08:14.570 "uuid": "5664f4d6-98a2-5c59-98f6-1278d5c7a752", 00:08:14.570 "is_configured": true, 00:08:14.570 "data_offset": 2048, 00:08:14.570 "data_size": 63488 00:08:14.570 }, 00:08:14.570 { 00:08:14.570 "name": "BaseBdev3", 00:08:14.570 "uuid": "8de692c9-c027-5661-b44a-ecbdd1fa67aa", 00:08:14.570 "is_configured": true, 00:08:14.570 "data_offset": 2048, 00:08:14.570 "data_size": 63488 00:08:14.570 } 00:08:14.570 ] 00:08:14.570 }' 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.570 02:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.139 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:15.139 02:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:15.139 [2024-11-28 02:23:48.690821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.077 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.078 02:23:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.078 02:23:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.078 02:23:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.078 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.078 "name": "raid_bdev1", 00:08:16.078 "uuid": "53f87646-fad8-424a-84af-b6162e71f22c", 00:08:16.078 "strip_size_kb": 64, 00:08:16.078 "state": "online", 00:08:16.078 
"raid_level": "raid0", 00:08:16.078 "superblock": true, 00:08:16.078 "num_base_bdevs": 3, 00:08:16.078 "num_base_bdevs_discovered": 3, 00:08:16.078 "num_base_bdevs_operational": 3, 00:08:16.078 "base_bdevs_list": [ 00:08:16.078 { 00:08:16.078 "name": "BaseBdev1", 00:08:16.078 "uuid": "2768904e-5b91-5daa-833e-9da74ae4f69c", 00:08:16.078 "is_configured": true, 00:08:16.078 "data_offset": 2048, 00:08:16.078 "data_size": 63488 00:08:16.078 }, 00:08:16.078 { 00:08:16.078 "name": "BaseBdev2", 00:08:16.078 "uuid": "5664f4d6-98a2-5c59-98f6-1278d5c7a752", 00:08:16.078 "is_configured": true, 00:08:16.078 "data_offset": 2048, 00:08:16.078 "data_size": 63488 00:08:16.078 }, 00:08:16.078 { 00:08:16.078 "name": "BaseBdev3", 00:08:16.078 "uuid": "8de692c9-c027-5661-b44a-ecbdd1fa67aa", 00:08:16.078 "is_configured": true, 00:08:16.078 "data_offset": 2048, 00:08:16.078 "data_size": 63488 00:08:16.078 } 00:08:16.078 ] 00:08:16.078 }' 00:08:16.078 02:23:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.078 02:23:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.646 02:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.647 [2024-11-28 02:23:50.054624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.647 [2024-11-28 02:23:50.054727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.647 [2024-11-28 02:23:50.057381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.647 [2024-11-28 02:23:50.057465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.647 [2024-11-28 02:23:50.057522] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.647 [2024-11-28 02:23:50.057561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:16.647 { 00:08:16.647 "results": [ 00:08:16.647 { 00:08:16.647 "job": "raid_bdev1", 00:08:16.647 "core_mask": "0x1", 00:08:16.647 "workload": "randrw", 00:08:16.647 "percentage": 50, 00:08:16.647 "status": "finished", 00:08:16.647 "queue_depth": 1, 00:08:16.647 "io_size": 131072, 00:08:16.647 "runtime": 1.364848, 00:08:16.647 "iops": 16372.519137662215, 00:08:16.647 "mibps": 2046.564892207777, 00:08:16.647 "io_failed": 1, 00:08:16.647 "io_timeout": 0, 00:08:16.647 "avg_latency_us": 84.62859663079148, 00:08:16.647 "min_latency_us": 22.134497816593885, 00:08:16.647 "max_latency_us": 1373.6803493449781 00:08:16.647 } 00:08:16.647 ], 00:08:16.647 "core_count": 1 00:08:16.647 } 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65271 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65271 ']' 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65271 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65271 00:08:16.647 killing process with pid 65271 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.647 02:23:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65271' 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65271 00:08:16.647 [2024-11-28 02:23:50.107177] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.647 02:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65271 00:08:16.647 [2024-11-28 02:23:50.321426] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.030 02:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:18.030 02:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xoRqBCHROM 00:08:18.030 02:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:18.030 02:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:18.030 02:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:18.030 02:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.030 02:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:18.030 02:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:18.030 00:08:18.030 real 0m4.439s 00:08:18.030 user 0m5.250s 00:08:18.030 sys 0m0.550s 00:08:18.030 ************************************ 00:08:18.030 END TEST raid_write_error_test 00:08:18.030 ************************************ 00:08:18.030 02:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.030 02:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.030 02:23:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:18.030 02:23:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:18.030 02:23:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:18.030 02:23:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.030 02:23:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.030 ************************************ 00:08:18.030 START TEST raid_state_function_test 00:08:18.030 ************************************ 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:18.030 02:23:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65415 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65415' 00:08:18.030 Process raid pid: 65415 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65415 00:08:18.030 02:23:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65415 ']' 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.030 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.030 [2024-11-28 02:23:51.617568] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:18.030 [2024-11-28 02:23:51.617777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.290 [2024-11-28 02:23:51.774387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.290 [2024-11-28 02:23:51.881481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.551 [2024-11-28 02:23:52.078487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.551 [2024-11-28 02:23:52.078598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.811 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.811 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.811 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:18.811 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.811 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.811 [2024-11-28 02:23:52.444996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.811 [2024-11-28 02:23:52.445099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.811 [2024-11-28 02:23:52.445112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.811 [2024-11-28 02:23:52.445122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.812 [2024-11-28 02:23:52.445129] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:18.812 [2024-11-28 02:23:52.445137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.812 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.150 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.150 "name": "Existed_Raid", 00:08:19.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.150 "strip_size_kb": 64, 00:08:19.150 "state": "configuring", 00:08:19.150 "raid_level": "concat", 00:08:19.150 "superblock": false, 00:08:19.150 "num_base_bdevs": 3, 00:08:19.150 "num_base_bdevs_discovered": 0, 00:08:19.150 "num_base_bdevs_operational": 3, 00:08:19.150 "base_bdevs_list": [ 00:08:19.150 { 00:08:19.150 "name": "BaseBdev1", 00:08:19.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.150 "is_configured": false, 00:08:19.150 "data_offset": 0, 00:08:19.150 "data_size": 0 00:08:19.150 }, 00:08:19.150 { 00:08:19.150 "name": "BaseBdev2", 00:08:19.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.150 "is_configured": false, 00:08:19.150 "data_offset": 0, 00:08:19.150 "data_size": 0 00:08:19.150 }, 00:08:19.150 { 00:08:19.150 "name": "BaseBdev3", 00:08:19.150 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:19.150 "is_configured": false, 00:08:19.150 "data_offset": 0, 00:08:19.150 "data_size": 0 00:08:19.150 } 00:08:19.150 ] 00:08:19.150 }' 00:08:19.150 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.150 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.411 [2024-11-28 02:23:52.880179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.411 [2024-11-28 02:23:52.880257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.411 [2024-11-28 02:23:52.888186] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:19.411 [2024-11-28 02:23:52.888278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:19.411 [2024-11-28 02:23:52.888322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.411 [2024-11-28 02:23:52.888352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:19.411 [2024-11-28 02:23:52.888409] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.411 [2024-11-28 02:23:52.888436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.411 [2024-11-28 02:23:52.933054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.411 BaseBdev1 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.411 [ 00:08:19.411 { 00:08:19.411 "name": "BaseBdev1", 00:08:19.411 "aliases": [ 00:08:19.411 "2fb61fb0-f0e1-4118-85b7-2a3a4598ced0" 00:08:19.411 ], 00:08:19.411 "product_name": "Malloc disk", 00:08:19.411 "block_size": 512, 00:08:19.411 "num_blocks": 65536, 00:08:19.411 "uuid": "2fb61fb0-f0e1-4118-85b7-2a3a4598ced0", 00:08:19.411 "assigned_rate_limits": { 00:08:19.411 "rw_ios_per_sec": 0, 00:08:19.411 "rw_mbytes_per_sec": 0, 00:08:19.411 "r_mbytes_per_sec": 0, 00:08:19.411 "w_mbytes_per_sec": 0 00:08:19.411 }, 00:08:19.411 "claimed": true, 00:08:19.411 "claim_type": "exclusive_write", 00:08:19.411 "zoned": false, 00:08:19.411 "supported_io_types": { 00:08:19.411 "read": true, 00:08:19.411 "write": true, 00:08:19.411 "unmap": true, 00:08:19.411 "flush": true, 00:08:19.411 "reset": true, 00:08:19.411 "nvme_admin": false, 00:08:19.411 "nvme_io": false, 00:08:19.411 "nvme_io_md": false, 00:08:19.411 "write_zeroes": true, 00:08:19.411 "zcopy": true, 00:08:19.411 "get_zone_info": false, 00:08:19.411 "zone_management": false, 00:08:19.411 "zone_append": false, 00:08:19.411 "compare": false, 00:08:19.411 "compare_and_write": false, 00:08:19.411 "abort": true, 00:08:19.411 "seek_hole": false, 00:08:19.411 "seek_data": false, 00:08:19.411 "copy": true, 00:08:19.411 "nvme_iov_md": false 00:08:19.411 }, 00:08:19.411 "memory_domains": [ 00:08:19.411 { 00:08:19.411 "dma_device_id": "system", 00:08:19.411 "dma_device_type": 1 00:08:19.411 }, 00:08:19.411 { 00:08:19.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:19.411 "dma_device_type": 2 00:08:19.411 } 00:08:19.411 ], 00:08:19.411 "driver_specific": {} 00:08:19.411 } 00:08:19.411 ] 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.411 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.412 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.412 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.412 02:23:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.412 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.412 "name": "Existed_Raid", 00:08:19.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.412 "strip_size_kb": 64, 00:08:19.412 "state": "configuring", 00:08:19.412 "raid_level": "concat", 00:08:19.412 "superblock": false, 00:08:19.412 "num_base_bdevs": 3, 00:08:19.412 "num_base_bdevs_discovered": 1, 00:08:19.412 "num_base_bdevs_operational": 3, 00:08:19.412 "base_bdevs_list": [ 00:08:19.412 { 00:08:19.412 "name": "BaseBdev1", 00:08:19.412 "uuid": "2fb61fb0-f0e1-4118-85b7-2a3a4598ced0", 00:08:19.412 "is_configured": true, 00:08:19.412 "data_offset": 0, 00:08:19.412 "data_size": 65536 00:08:19.412 }, 00:08:19.412 { 00:08:19.412 "name": "BaseBdev2", 00:08:19.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.412 "is_configured": false, 00:08:19.412 "data_offset": 0, 00:08:19.412 "data_size": 0 00:08:19.412 }, 00:08:19.412 { 00:08:19.412 "name": "BaseBdev3", 00:08:19.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.412 "is_configured": false, 00:08:19.412 "data_offset": 0, 00:08:19.412 "data_size": 0 00:08:19.412 } 00:08:19.412 ] 00:08:19.412 }' 00:08:19.412 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.412 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.980 [2024-11-28 02:23:53.444214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.980 [2024-11-28 02:23:53.444310] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.980 [2024-11-28 02:23:53.452253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.980 [2024-11-28 02:23:53.454122] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.980 [2024-11-28 02:23:53.454163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.980 [2024-11-28 02:23:53.454174] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.980 [2024-11-28 02:23:53.454183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.980 02:23:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.980 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.980 "name": "Existed_Raid", 00:08:19.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.980 "strip_size_kb": 64, 00:08:19.980 "state": "configuring", 00:08:19.980 "raid_level": "concat", 00:08:19.981 "superblock": false, 00:08:19.981 "num_base_bdevs": 3, 00:08:19.981 "num_base_bdevs_discovered": 1, 00:08:19.981 "num_base_bdevs_operational": 3, 00:08:19.981 "base_bdevs_list": [ 00:08:19.981 { 00:08:19.981 "name": "BaseBdev1", 00:08:19.981 "uuid": "2fb61fb0-f0e1-4118-85b7-2a3a4598ced0", 00:08:19.981 "is_configured": true, 00:08:19.981 "data_offset": 
0, 00:08:19.981 "data_size": 65536 00:08:19.981 }, 00:08:19.981 { 00:08:19.981 "name": "BaseBdev2", 00:08:19.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.981 "is_configured": false, 00:08:19.981 "data_offset": 0, 00:08:19.981 "data_size": 0 00:08:19.981 }, 00:08:19.981 { 00:08:19.981 "name": "BaseBdev3", 00:08:19.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.981 "is_configured": false, 00:08:19.981 "data_offset": 0, 00:08:19.981 "data_size": 0 00:08:19.981 } 00:08:19.981 ] 00:08:19.981 }' 00:08:19.981 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.981 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.240 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:20.240 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.240 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.240 [2024-11-28 02:23:53.914854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.240 BaseBdev2 00:08:20.240 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.240 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:20.240 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:20.240 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.500 [ 00:08:20.500 { 00:08:20.500 "name": "BaseBdev2", 00:08:20.500 "aliases": [ 00:08:20.500 "0e3a42aa-7697-4e55-8f5e-f0cd18b5d6c1" 00:08:20.500 ], 00:08:20.500 "product_name": "Malloc disk", 00:08:20.500 "block_size": 512, 00:08:20.500 "num_blocks": 65536, 00:08:20.500 "uuid": "0e3a42aa-7697-4e55-8f5e-f0cd18b5d6c1", 00:08:20.500 "assigned_rate_limits": { 00:08:20.500 "rw_ios_per_sec": 0, 00:08:20.500 "rw_mbytes_per_sec": 0, 00:08:20.500 "r_mbytes_per_sec": 0, 00:08:20.500 "w_mbytes_per_sec": 0 00:08:20.500 }, 00:08:20.500 "claimed": true, 00:08:20.500 "claim_type": "exclusive_write", 00:08:20.500 "zoned": false, 00:08:20.500 "supported_io_types": { 00:08:20.500 "read": true, 00:08:20.500 "write": true, 00:08:20.500 "unmap": true, 00:08:20.500 "flush": true, 00:08:20.500 "reset": true, 00:08:20.500 "nvme_admin": false, 00:08:20.500 "nvme_io": false, 00:08:20.500 "nvme_io_md": false, 00:08:20.500 "write_zeroes": true, 00:08:20.500 "zcopy": true, 00:08:20.500 "get_zone_info": false, 00:08:20.500 "zone_management": false, 00:08:20.500 "zone_append": false, 00:08:20.500 "compare": false, 00:08:20.500 "compare_and_write": false, 00:08:20.500 "abort": true, 00:08:20.500 "seek_hole": 
false, 00:08:20.500 "seek_data": false, 00:08:20.500 "copy": true, 00:08:20.500 "nvme_iov_md": false 00:08:20.500 }, 00:08:20.500 "memory_domains": [ 00:08:20.500 { 00:08:20.500 "dma_device_id": "system", 00:08:20.500 "dma_device_type": 1 00:08:20.500 }, 00:08:20.500 { 00:08:20.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.500 "dma_device_type": 2 00:08:20.500 } 00:08:20.500 ], 00:08:20.500 "driver_specific": {} 00:08:20.500 } 00:08:20.500 ] 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.500 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.500 "name": "Existed_Raid", 00:08:20.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.500 "strip_size_kb": 64, 00:08:20.500 "state": "configuring", 00:08:20.500 "raid_level": "concat", 00:08:20.500 "superblock": false, 00:08:20.500 "num_base_bdevs": 3, 00:08:20.500 "num_base_bdevs_discovered": 2, 00:08:20.500 "num_base_bdevs_operational": 3, 00:08:20.500 "base_bdevs_list": [ 00:08:20.500 { 00:08:20.500 "name": "BaseBdev1", 00:08:20.500 "uuid": "2fb61fb0-f0e1-4118-85b7-2a3a4598ced0", 00:08:20.500 "is_configured": true, 00:08:20.500 "data_offset": 0, 00:08:20.500 "data_size": 65536 00:08:20.500 }, 00:08:20.500 { 00:08:20.500 "name": "BaseBdev2", 00:08:20.500 "uuid": "0e3a42aa-7697-4e55-8f5e-f0cd18b5d6c1", 00:08:20.500 "is_configured": true, 00:08:20.500 "data_offset": 0, 00:08:20.500 "data_size": 65536 00:08:20.500 }, 00:08:20.500 { 00:08:20.500 "name": "BaseBdev3", 00:08:20.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.500 "is_configured": false, 00:08:20.500 "data_offset": 0, 00:08:20.500 "data_size": 0 00:08:20.500 } 00:08:20.500 ] 00:08:20.500 }' 00:08:20.501 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.501 02:23:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.760 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:20.760 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.760 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.760 [2024-11-28 02:23:54.409745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:20.761 [2024-11-28 02:23:54.409793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:20.761 [2024-11-28 02:23:54.409805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:20.761 [2024-11-28 02:23:54.410106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:20.761 [2024-11-28 02:23:54.410284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:20.761 [2024-11-28 02:23:54.410295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:20.761 [2024-11-28 02:23:54.410559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.761 BaseBdev3 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.761 02:23:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.761 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.761 [ 00:08:20.761 { 00:08:20.761 "name": "BaseBdev3", 00:08:20.761 "aliases": [ 00:08:20.761 "5bda0749-c1f3-4a28-991b-9695dec98ac7" 00:08:20.761 ], 00:08:20.761 "product_name": "Malloc disk", 00:08:20.761 "block_size": 512, 00:08:20.761 "num_blocks": 65536, 00:08:20.761 "uuid": "5bda0749-c1f3-4a28-991b-9695dec98ac7", 00:08:20.761 "assigned_rate_limits": { 00:08:20.761 "rw_ios_per_sec": 0, 00:08:21.021 "rw_mbytes_per_sec": 0, 00:08:21.021 "r_mbytes_per_sec": 0, 00:08:21.021 "w_mbytes_per_sec": 0 00:08:21.021 }, 00:08:21.021 "claimed": true, 00:08:21.021 "claim_type": "exclusive_write", 00:08:21.021 "zoned": false, 00:08:21.021 "supported_io_types": { 00:08:21.021 "read": true, 00:08:21.021 "write": true, 00:08:21.021 "unmap": true, 00:08:21.021 "flush": true, 00:08:21.021 "reset": true, 00:08:21.021 "nvme_admin": false, 00:08:21.021 "nvme_io": false, 00:08:21.021 "nvme_io_md": false, 00:08:21.021 "write_zeroes": true, 00:08:21.021 "zcopy": true, 00:08:21.021 "get_zone_info": false, 00:08:21.021 "zone_management": false, 00:08:21.021 "zone_append": false, 00:08:21.021 "compare": false, 
00:08:21.021 "compare_and_write": false, 00:08:21.021 "abort": true, 00:08:21.021 "seek_hole": false, 00:08:21.021 "seek_data": false, 00:08:21.021 "copy": true, 00:08:21.021 "nvme_iov_md": false 00:08:21.021 }, 00:08:21.021 "memory_domains": [ 00:08:21.021 { 00:08:21.021 "dma_device_id": "system", 00:08:21.021 "dma_device_type": 1 00:08:21.021 }, 00:08:21.021 { 00:08:21.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.021 "dma_device_type": 2 00:08:21.021 } 00:08:21.021 ], 00:08:21.021 "driver_specific": {} 00:08:21.021 } 00:08:21.021 ] 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.021 "name": "Existed_Raid", 00:08:21.021 "uuid": "d7e8d3c0-3931-4431-833a-9d2df64a536b", 00:08:21.021 "strip_size_kb": 64, 00:08:21.021 "state": "online", 00:08:21.021 "raid_level": "concat", 00:08:21.021 "superblock": false, 00:08:21.021 "num_base_bdevs": 3, 00:08:21.021 "num_base_bdevs_discovered": 3, 00:08:21.021 "num_base_bdevs_operational": 3, 00:08:21.021 "base_bdevs_list": [ 00:08:21.021 { 00:08:21.021 "name": "BaseBdev1", 00:08:21.021 "uuid": "2fb61fb0-f0e1-4118-85b7-2a3a4598ced0", 00:08:21.021 "is_configured": true, 00:08:21.021 "data_offset": 0, 00:08:21.021 "data_size": 65536 00:08:21.021 }, 00:08:21.021 { 00:08:21.021 "name": "BaseBdev2", 00:08:21.021 "uuid": "0e3a42aa-7697-4e55-8f5e-f0cd18b5d6c1", 00:08:21.021 "is_configured": true, 00:08:21.021 "data_offset": 0, 00:08:21.021 "data_size": 65536 00:08:21.021 }, 00:08:21.021 { 00:08:21.021 "name": "BaseBdev3", 00:08:21.021 "uuid": "5bda0749-c1f3-4a28-991b-9695dec98ac7", 00:08:21.021 "is_configured": true, 00:08:21.021 "data_offset": 0, 00:08:21.021 "data_size": 65536 00:08:21.021 } 00:08:21.021 ] 00:08:21.021 }' 00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:21.021 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.281 [2024-11-28 02:23:54.849329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.281 "name": "Existed_Raid", 00:08:21.281 "aliases": [ 00:08:21.281 "d7e8d3c0-3931-4431-833a-9d2df64a536b" 00:08:21.281 ], 00:08:21.281 "product_name": "Raid Volume", 00:08:21.281 "block_size": 512, 00:08:21.281 "num_blocks": 196608, 00:08:21.281 "uuid": "d7e8d3c0-3931-4431-833a-9d2df64a536b", 00:08:21.281 "assigned_rate_limits": { 00:08:21.281 "rw_ios_per_sec": 0, 00:08:21.281 "rw_mbytes_per_sec": 0, 00:08:21.281 "r_mbytes_per_sec": 
0, 00:08:21.281 "w_mbytes_per_sec": 0 00:08:21.281 }, 00:08:21.281 "claimed": false, 00:08:21.281 "zoned": false, 00:08:21.281 "supported_io_types": { 00:08:21.281 "read": true, 00:08:21.281 "write": true, 00:08:21.281 "unmap": true, 00:08:21.281 "flush": true, 00:08:21.281 "reset": true, 00:08:21.281 "nvme_admin": false, 00:08:21.281 "nvme_io": false, 00:08:21.281 "nvme_io_md": false, 00:08:21.281 "write_zeroes": true, 00:08:21.281 "zcopy": false, 00:08:21.281 "get_zone_info": false, 00:08:21.281 "zone_management": false, 00:08:21.281 "zone_append": false, 00:08:21.281 "compare": false, 00:08:21.281 "compare_and_write": false, 00:08:21.281 "abort": false, 00:08:21.281 "seek_hole": false, 00:08:21.281 "seek_data": false, 00:08:21.281 "copy": false, 00:08:21.281 "nvme_iov_md": false 00:08:21.281 }, 00:08:21.281 "memory_domains": [ 00:08:21.281 { 00:08:21.281 "dma_device_id": "system", 00:08:21.281 "dma_device_type": 1 00:08:21.281 }, 00:08:21.281 { 00:08:21.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.281 "dma_device_type": 2 00:08:21.281 }, 00:08:21.281 { 00:08:21.281 "dma_device_id": "system", 00:08:21.281 "dma_device_type": 1 00:08:21.281 }, 00:08:21.281 { 00:08:21.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.281 "dma_device_type": 2 00:08:21.281 }, 00:08:21.281 { 00:08:21.281 "dma_device_id": "system", 00:08:21.281 "dma_device_type": 1 00:08:21.281 }, 00:08:21.281 { 00:08:21.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.281 "dma_device_type": 2 00:08:21.281 } 00:08:21.281 ], 00:08:21.281 "driver_specific": { 00:08:21.281 "raid": { 00:08:21.281 "uuid": "d7e8d3c0-3931-4431-833a-9d2df64a536b", 00:08:21.281 "strip_size_kb": 64, 00:08:21.281 "state": "online", 00:08:21.281 "raid_level": "concat", 00:08:21.281 "superblock": false, 00:08:21.281 "num_base_bdevs": 3, 00:08:21.281 "num_base_bdevs_discovered": 3, 00:08:21.281 "num_base_bdevs_operational": 3, 00:08:21.281 "base_bdevs_list": [ 00:08:21.281 { 00:08:21.281 "name": "BaseBdev1", 
00:08:21.281 "uuid": "2fb61fb0-f0e1-4118-85b7-2a3a4598ced0", 00:08:21.281 "is_configured": true, 00:08:21.281 "data_offset": 0, 00:08:21.281 "data_size": 65536 00:08:21.281 }, 00:08:21.281 { 00:08:21.281 "name": "BaseBdev2", 00:08:21.281 "uuid": "0e3a42aa-7697-4e55-8f5e-f0cd18b5d6c1", 00:08:21.281 "is_configured": true, 00:08:21.281 "data_offset": 0, 00:08:21.281 "data_size": 65536 00:08:21.281 }, 00:08:21.281 { 00:08:21.281 "name": "BaseBdev3", 00:08:21.281 "uuid": "5bda0749-c1f3-4a28-991b-9695dec98ac7", 00:08:21.281 "is_configured": true, 00:08:21.281 "data_offset": 0, 00:08:21.281 "data_size": 65536 00:08:21.281 } 00:08:21.281 ] 00:08:21.281 } 00:08:21.281 } 00:08:21.281 }' 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:21.281 BaseBdev2 00:08:21.281 BaseBdev3' 00:08:21.281 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.558 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.558 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.558 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.558 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:21.558 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.558 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.558 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.559 [2024-11-28 02:23:55.132594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:21.559 [2024-11-28 02:23:55.132634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.559 [2024-11-28 02:23:55.132684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.559 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.826 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.826 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.826 "name": "Existed_Raid", 00:08:21.826 "uuid": "d7e8d3c0-3931-4431-833a-9d2df64a536b", 00:08:21.826 "strip_size_kb": 64, 00:08:21.826 "state": "offline", 00:08:21.826 "raid_level": "concat", 00:08:21.826 "superblock": false, 00:08:21.826 "num_base_bdevs": 3, 00:08:21.826 "num_base_bdevs_discovered": 2, 00:08:21.826 "num_base_bdevs_operational": 2, 00:08:21.826 "base_bdevs_list": [ 00:08:21.826 { 00:08:21.826 "name": null, 00:08:21.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.826 "is_configured": false, 00:08:21.826 "data_offset": 0, 00:08:21.826 "data_size": 65536 00:08:21.826 }, 00:08:21.826 { 00:08:21.826 "name": "BaseBdev2", 00:08:21.826 "uuid": 
"0e3a42aa-7697-4e55-8f5e-f0cd18b5d6c1", 00:08:21.826 "is_configured": true, 00:08:21.826 "data_offset": 0, 00:08:21.826 "data_size": 65536 00:08:21.826 }, 00:08:21.826 { 00:08:21.826 "name": "BaseBdev3", 00:08:21.826 "uuid": "5bda0749-c1f3-4a28-991b-9695dec98ac7", 00:08:21.826 "is_configured": true, 00:08:21.826 "data_offset": 0, 00:08:21.826 "data_size": 65536 00:08:21.826 } 00:08:21.826 ] 00:08:21.826 }' 00:08:21.826 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.826 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.085 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:22.085 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.086 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.086 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.086 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.086 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:22.086 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.086 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:22.086 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:22.086 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:22.086 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.086 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.086 [2024-11-28 02:23:55.704384] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.345 [2024-11-28 02:23:55.849307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:22.345 [2024-11-28 02:23:55.849355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.345 02:23:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.345 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.605 BaseBdev2 00:08:22.605 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.606 
02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.606 [ 00:08:22.606 { 00:08:22.606 "name": "BaseBdev2", 00:08:22.606 "aliases": [ 00:08:22.606 "9135820b-5612-40c6-b627-e5911bbab020" 00:08:22.606 ], 00:08:22.606 "product_name": "Malloc disk", 00:08:22.606 "block_size": 512, 00:08:22.606 "num_blocks": 65536, 00:08:22.606 "uuid": "9135820b-5612-40c6-b627-e5911bbab020", 00:08:22.606 "assigned_rate_limits": { 00:08:22.606 "rw_ios_per_sec": 0, 00:08:22.606 "rw_mbytes_per_sec": 0, 00:08:22.606 "r_mbytes_per_sec": 0, 00:08:22.606 "w_mbytes_per_sec": 0 00:08:22.606 }, 00:08:22.606 "claimed": false, 00:08:22.606 "zoned": false, 00:08:22.606 "supported_io_types": { 00:08:22.606 "read": true, 00:08:22.606 "write": true, 00:08:22.606 "unmap": true, 00:08:22.606 "flush": true, 00:08:22.606 "reset": true, 00:08:22.606 "nvme_admin": false, 00:08:22.606 "nvme_io": false, 00:08:22.606 "nvme_io_md": false, 00:08:22.606 "write_zeroes": true, 
00:08:22.606 "zcopy": true, 00:08:22.606 "get_zone_info": false, 00:08:22.606 "zone_management": false, 00:08:22.606 "zone_append": false, 00:08:22.606 "compare": false, 00:08:22.606 "compare_and_write": false, 00:08:22.606 "abort": true, 00:08:22.606 "seek_hole": false, 00:08:22.606 "seek_data": false, 00:08:22.606 "copy": true, 00:08:22.606 "nvme_iov_md": false 00:08:22.606 }, 00:08:22.606 "memory_domains": [ 00:08:22.606 { 00:08:22.606 "dma_device_id": "system", 00:08:22.606 "dma_device_type": 1 00:08:22.606 }, 00:08:22.606 { 00:08:22.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.606 "dma_device_type": 2 00:08:22.606 } 00:08:22.606 ], 00:08:22.606 "driver_specific": {} 00:08:22.606 } 00:08:22.606 ] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.606 BaseBdev3 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.606 02:23:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.606 [ 00:08:22.606 { 00:08:22.606 "name": "BaseBdev3", 00:08:22.606 "aliases": [ 00:08:22.606 "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80" 00:08:22.606 ], 00:08:22.606 "product_name": "Malloc disk", 00:08:22.606 "block_size": 512, 00:08:22.606 "num_blocks": 65536, 00:08:22.606 "uuid": "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80", 00:08:22.606 "assigned_rate_limits": { 00:08:22.606 "rw_ios_per_sec": 0, 00:08:22.606 "rw_mbytes_per_sec": 0, 00:08:22.606 "r_mbytes_per_sec": 0, 00:08:22.606 "w_mbytes_per_sec": 0 00:08:22.606 }, 00:08:22.606 "claimed": false, 00:08:22.606 "zoned": false, 00:08:22.606 "supported_io_types": { 00:08:22.606 "read": true, 00:08:22.606 "write": true, 00:08:22.606 "unmap": true, 00:08:22.606 "flush": true, 00:08:22.606 "reset": true, 00:08:22.606 "nvme_admin": false, 00:08:22.606 "nvme_io": false, 00:08:22.606 "nvme_io_md": false, 00:08:22.606 "write_zeroes": true, 
00:08:22.606 "zcopy": true, 00:08:22.606 "get_zone_info": false, 00:08:22.606 "zone_management": false, 00:08:22.606 "zone_append": false, 00:08:22.606 "compare": false, 00:08:22.606 "compare_and_write": false, 00:08:22.606 "abort": true, 00:08:22.606 "seek_hole": false, 00:08:22.606 "seek_data": false, 00:08:22.606 "copy": true, 00:08:22.606 "nvme_iov_md": false 00:08:22.606 }, 00:08:22.606 "memory_domains": [ 00:08:22.606 { 00:08:22.606 "dma_device_id": "system", 00:08:22.606 "dma_device_type": 1 00:08:22.606 }, 00:08:22.606 { 00:08:22.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.606 "dma_device_type": 2 00:08:22.606 } 00:08:22.606 ], 00:08:22.606 "driver_specific": {} 00:08:22.606 } 00:08:22.606 ] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.606 [2024-11-28 02:23:56.158795] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.606 [2024-11-28 02:23:56.158837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.606 [2024-11-28 02:23:56.158857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.606 [2024-11-28 02:23:56.160568] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.606 02:23:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.606 "name": "Existed_Raid", 00:08:22.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.606 "strip_size_kb": 64, 00:08:22.606 "state": "configuring", 00:08:22.606 "raid_level": "concat", 00:08:22.606 "superblock": false, 00:08:22.606 "num_base_bdevs": 3, 00:08:22.606 "num_base_bdevs_discovered": 2, 00:08:22.606 "num_base_bdevs_operational": 3, 00:08:22.606 "base_bdevs_list": [ 00:08:22.606 { 00:08:22.606 "name": "BaseBdev1", 00:08:22.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.606 "is_configured": false, 00:08:22.607 "data_offset": 0, 00:08:22.607 "data_size": 0 00:08:22.607 }, 00:08:22.607 { 00:08:22.607 "name": "BaseBdev2", 00:08:22.607 "uuid": "9135820b-5612-40c6-b627-e5911bbab020", 00:08:22.607 "is_configured": true, 00:08:22.607 "data_offset": 0, 00:08:22.607 "data_size": 65536 00:08:22.607 }, 00:08:22.607 { 00:08:22.607 "name": "BaseBdev3", 00:08:22.607 "uuid": "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80", 00:08:22.607 "is_configured": true, 00:08:22.607 "data_offset": 0, 00:08:22.607 "data_size": 65536 00:08:22.607 } 00:08:22.607 ] 00:08:22.607 }' 00:08:22.607 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.607 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.176 [2024-11-28 02:23:56.610046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.176 "name": "Existed_Raid", 00:08:23.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.176 "strip_size_kb": 64, 00:08:23.176 "state": "configuring", 00:08:23.176 "raid_level": "concat", 00:08:23.176 "superblock": false, 
00:08:23.176 "num_base_bdevs": 3, 00:08:23.176 "num_base_bdevs_discovered": 1, 00:08:23.176 "num_base_bdevs_operational": 3, 00:08:23.176 "base_bdevs_list": [ 00:08:23.176 { 00:08:23.176 "name": "BaseBdev1", 00:08:23.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.176 "is_configured": false, 00:08:23.176 "data_offset": 0, 00:08:23.176 "data_size": 0 00:08:23.176 }, 00:08:23.176 { 00:08:23.176 "name": null, 00:08:23.176 "uuid": "9135820b-5612-40c6-b627-e5911bbab020", 00:08:23.176 "is_configured": false, 00:08:23.176 "data_offset": 0, 00:08:23.176 "data_size": 65536 00:08:23.176 }, 00:08:23.176 { 00:08:23.176 "name": "BaseBdev3", 00:08:23.176 "uuid": "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80", 00:08:23.176 "is_configured": true, 00:08:23.176 "data_offset": 0, 00:08:23.176 "data_size": 65536 00:08:23.176 } 00:08:23.176 ] 00:08:23.176 }' 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.176 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.436 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.436 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:23.436 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.436 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.436 
02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.436 [2024-11-28 02:23:57.072857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.436 BaseBdev1 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.436 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.436 [ 00:08:23.436 { 00:08:23.436 "name": "BaseBdev1", 00:08:23.436 "aliases": [ 00:08:23.436 "2bc2c4ad-a544-45e9-966d-0cddccc4e59e" 00:08:23.436 ], 00:08:23.436 "product_name": 
"Malloc disk", 00:08:23.436 "block_size": 512, 00:08:23.436 "num_blocks": 65536, 00:08:23.436 "uuid": "2bc2c4ad-a544-45e9-966d-0cddccc4e59e", 00:08:23.436 "assigned_rate_limits": { 00:08:23.436 "rw_ios_per_sec": 0, 00:08:23.436 "rw_mbytes_per_sec": 0, 00:08:23.436 "r_mbytes_per_sec": 0, 00:08:23.436 "w_mbytes_per_sec": 0 00:08:23.436 }, 00:08:23.436 "claimed": true, 00:08:23.436 "claim_type": "exclusive_write", 00:08:23.436 "zoned": false, 00:08:23.436 "supported_io_types": { 00:08:23.437 "read": true, 00:08:23.437 "write": true, 00:08:23.437 "unmap": true, 00:08:23.437 "flush": true, 00:08:23.437 "reset": true, 00:08:23.437 "nvme_admin": false, 00:08:23.437 "nvme_io": false, 00:08:23.437 "nvme_io_md": false, 00:08:23.437 "write_zeroes": true, 00:08:23.437 "zcopy": true, 00:08:23.437 "get_zone_info": false, 00:08:23.437 "zone_management": false, 00:08:23.437 "zone_append": false, 00:08:23.437 "compare": false, 00:08:23.437 "compare_and_write": false, 00:08:23.437 "abort": true, 00:08:23.437 "seek_hole": false, 00:08:23.437 "seek_data": false, 00:08:23.437 "copy": true, 00:08:23.437 "nvme_iov_md": false 00:08:23.437 }, 00:08:23.437 "memory_domains": [ 00:08:23.437 { 00:08:23.437 "dma_device_id": "system", 00:08:23.437 "dma_device_type": 1 00:08:23.437 }, 00:08:23.437 { 00:08:23.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.437 "dma_device_type": 2 00:08:23.437 } 00:08:23.437 ], 00:08:23.437 "driver_specific": {} 00:08:23.437 } 00:08:23.437 ] 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.437 02:23:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.437 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.696 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.696 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.696 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.696 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.696 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.696 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.696 "name": "Existed_Raid", 00:08:23.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.696 "strip_size_kb": 64, 00:08:23.696 "state": "configuring", 00:08:23.696 "raid_level": "concat", 00:08:23.696 "superblock": false, 00:08:23.696 "num_base_bdevs": 3, 00:08:23.696 "num_base_bdevs_discovered": 2, 00:08:23.696 "num_base_bdevs_operational": 3, 00:08:23.696 "base_bdevs_list": [ 00:08:23.696 { 00:08:23.696 "name": "BaseBdev1", 
00:08:23.696 "uuid": "2bc2c4ad-a544-45e9-966d-0cddccc4e59e", 00:08:23.696 "is_configured": true, 00:08:23.696 "data_offset": 0, 00:08:23.696 "data_size": 65536 00:08:23.696 }, 00:08:23.696 { 00:08:23.696 "name": null, 00:08:23.696 "uuid": "9135820b-5612-40c6-b627-e5911bbab020", 00:08:23.696 "is_configured": false, 00:08:23.696 "data_offset": 0, 00:08:23.696 "data_size": 65536 00:08:23.696 }, 00:08:23.696 { 00:08:23.696 "name": "BaseBdev3", 00:08:23.696 "uuid": "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80", 00:08:23.696 "is_configured": true, 00:08:23.696 "data_offset": 0, 00:08:23.696 "data_size": 65536 00:08:23.696 } 00:08:23.696 ] 00:08:23.696 }' 00:08:23.696 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.696 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.956 [2024-11-28 02:23:57.588015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:23.956 
02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.956 "name": "Existed_Raid", 00:08:23.956 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:23.956 "strip_size_kb": 64, 00:08:23.956 "state": "configuring", 00:08:23.956 "raid_level": "concat", 00:08:23.956 "superblock": false, 00:08:23.956 "num_base_bdevs": 3, 00:08:23.956 "num_base_bdevs_discovered": 1, 00:08:23.956 "num_base_bdevs_operational": 3, 00:08:23.956 "base_bdevs_list": [ 00:08:23.956 { 00:08:23.956 "name": "BaseBdev1", 00:08:23.956 "uuid": "2bc2c4ad-a544-45e9-966d-0cddccc4e59e", 00:08:23.956 "is_configured": true, 00:08:23.956 "data_offset": 0, 00:08:23.956 "data_size": 65536 00:08:23.956 }, 00:08:23.956 { 00:08:23.956 "name": null, 00:08:23.956 "uuid": "9135820b-5612-40c6-b627-e5911bbab020", 00:08:23.956 "is_configured": false, 00:08:23.956 "data_offset": 0, 00:08:23.956 "data_size": 65536 00:08:23.956 }, 00:08:23.956 { 00:08:23.956 "name": null, 00:08:23.956 "uuid": "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80", 00:08:23.956 "is_configured": false, 00:08:23.956 "data_offset": 0, 00:08:23.956 "data_size": 65536 00:08:23.956 } 00:08:23.956 ] 00:08:23.956 }' 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.956 02:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.525 [2024-11-28 02:23:58.075285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.525 "name": "Existed_Raid", 00:08:24.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.525 "strip_size_kb": 64, 00:08:24.525 "state": "configuring", 00:08:24.525 "raid_level": "concat", 00:08:24.525 "superblock": false, 00:08:24.525 "num_base_bdevs": 3, 00:08:24.525 "num_base_bdevs_discovered": 2, 00:08:24.525 "num_base_bdevs_operational": 3, 00:08:24.525 "base_bdevs_list": [ 00:08:24.525 { 00:08:24.525 "name": "BaseBdev1", 00:08:24.525 "uuid": "2bc2c4ad-a544-45e9-966d-0cddccc4e59e", 00:08:24.525 "is_configured": true, 00:08:24.525 "data_offset": 0, 00:08:24.525 "data_size": 65536 00:08:24.525 }, 00:08:24.525 { 00:08:24.525 "name": null, 00:08:24.525 "uuid": "9135820b-5612-40c6-b627-e5911bbab020", 00:08:24.525 "is_configured": false, 00:08:24.525 "data_offset": 0, 00:08:24.525 "data_size": 65536 00:08:24.525 }, 00:08:24.525 { 00:08:24.525 "name": "BaseBdev3", 00:08:24.525 "uuid": "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80", 00:08:24.525 "is_configured": true, 00:08:24.525 "data_offset": 0, 00:08:24.525 "data_size": 65536 00:08:24.525 } 00:08:24.525 ] 00:08:24.525 }' 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.525 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.095 [2024-11-28 02:23:58.630331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.095 02:23:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.095 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.355 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.355 "name": "Existed_Raid", 00:08:25.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.355 "strip_size_kb": 64, 00:08:25.355 "state": "configuring", 00:08:25.355 "raid_level": "concat", 00:08:25.355 "superblock": false, 00:08:25.355 "num_base_bdevs": 3, 00:08:25.355 "num_base_bdevs_discovered": 1, 00:08:25.355 "num_base_bdevs_operational": 3, 00:08:25.355 "base_bdevs_list": [ 00:08:25.355 { 00:08:25.355 "name": null, 00:08:25.355 "uuid": "2bc2c4ad-a544-45e9-966d-0cddccc4e59e", 00:08:25.355 "is_configured": false, 00:08:25.355 "data_offset": 0, 00:08:25.355 "data_size": 65536 00:08:25.355 }, 00:08:25.355 { 00:08:25.355 "name": null, 00:08:25.355 "uuid": "9135820b-5612-40c6-b627-e5911bbab020", 00:08:25.355 "is_configured": false, 00:08:25.355 "data_offset": 0, 00:08:25.355 "data_size": 65536 00:08:25.355 }, 00:08:25.355 { 00:08:25.355 "name": "BaseBdev3", 00:08:25.355 "uuid": "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80", 00:08:25.355 "is_configured": true, 00:08:25.355 "data_offset": 0, 00:08:25.355 "data_size": 65536 00:08:25.355 } 00:08:25.355 ] 00:08:25.355 }' 00:08:25.355 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.355 02:23:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.615 [2024-11-28 02:23:59.165334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.615 02:23:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.615 "name": "Existed_Raid", 00:08:25.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.615 "strip_size_kb": 64, 00:08:25.615 "state": "configuring", 00:08:25.615 "raid_level": "concat", 00:08:25.615 "superblock": false, 00:08:25.615 "num_base_bdevs": 3, 00:08:25.615 "num_base_bdevs_discovered": 2, 00:08:25.615 "num_base_bdevs_operational": 3, 00:08:25.615 "base_bdevs_list": [ 00:08:25.615 { 00:08:25.615 "name": null, 00:08:25.615 "uuid": "2bc2c4ad-a544-45e9-966d-0cddccc4e59e", 00:08:25.615 "is_configured": false, 00:08:25.615 "data_offset": 0, 00:08:25.615 "data_size": 65536 00:08:25.615 }, 00:08:25.615 { 00:08:25.615 "name": "BaseBdev2", 00:08:25.615 "uuid": "9135820b-5612-40c6-b627-e5911bbab020", 00:08:25.615 "is_configured": true, 00:08:25.615 "data_offset": 
0, 00:08:25.615 "data_size": 65536 00:08:25.615 }, 00:08:25.615 { 00:08:25.615 "name": "BaseBdev3", 00:08:25.615 "uuid": "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80", 00:08:25.615 "is_configured": true, 00:08:25.615 "data_offset": 0, 00:08:25.615 "data_size": 65536 00:08:25.615 } 00:08:25.615 ] 00:08:25.615 }' 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.615 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:26.184 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2bc2c4ad-a544-45e9-966d-0cddccc4e59e 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 [2024-11-28 02:23:59.749351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:26.185 [2024-11-28 02:23:59.749457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:26.185 [2024-11-28 02:23:59.749485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:26.185 [2024-11-28 02:23:59.749743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:26.185 [2024-11-28 02:23:59.749953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:26.185 [2024-11-28 02:23:59.750013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:26.185 [2024-11-28 02:23:59.750300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.185 NewBaseBdev 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.185 
02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 [ 00:08:26.185 { 00:08:26.185 "name": "NewBaseBdev", 00:08:26.185 "aliases": [ 00:08:26.185 "2bc2c4ad-a544-45e9-966d-0cddccc4e59e" 00:08:26.185 ], 00:08:26.185 "product_name": "Malloc disk", 00:08:26.185 "block_size": 512, 00:08:26.185 "num_blocks": 65536, 00:08:26.185 "uuid": "2bc2c4ad-a544-45e9-966d-0cddccc4e59e", 00:08:26.185 "assigned_rate_limits": { 00:08:26.185 "rw_ios_per_sec": 0, 00:08:26.185 "rw_mbytes_per_sec": 0, 00:08:26.185 "r_mbytes_per_sec": 0, 00:08:26.185 "w_mbytes_per_sec": 0 00:08:26.185 }, 00:08:26.185 "claimed": true, 00:08:26.185 "claim_type": "exclusive_write", 00:08:26.185 "zoned": false, 00:08:26.185 "supported_io_types": { 00:08:26.185 "read": true, 00:08:26.185 "write": true, 00:08:26.185 "unmap": true, 00:08:26.185 "flush": true, 00:08:26.185 "reset": true, 00:08:26.185 "nvme_admin": false, 00:08:26.185 "nvme_io": false, 00:08:26.185 "nvme_io_md": false, 00:08:26.185 "write_zeroes": true, 00:08:26.185 "zcopy": true, 00:08:26.185 "get_zone_info": false, 00:08:26.185 "zone_management": false, 00:08:26.185 "zone_append": false, 00:08:26.185 "compare": false, 00:08:26.185 "compare_and_write": false, 00:08:26.185 "abort": true, 00:08:26.185 "seek_hole": false, 00:08:26.185 "seek_data": false, 00:08:26.185 "copy": true, 00:08:26.185 "nvme_iov_md": false 00:08:26.185 }, 00:08:26.185 
"memory_domains": [ 00:08:26.185 { 00:08:26.185 "dma_device_id": "system", 00:08:26.185 "dma_device_type": 1 00:08:26.185 }, 00:08:26.185 { 00:08:26.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.185 "dma_device_type": 2 00:08:26.185 } 00:08:26.185 ], 00:08:26.185 "driver_specific": {} 00:08:26.185 } 00:08:26.185 ] 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.185 "name": "Existed_Raid", 00:08:26.185 "uuid": "64c4557b-b07d-43e2-aa55-9075feb62f34", 00:08:26.185 "strip_size_kb": 64, 00:08:26.185 "state": "online", 00:08:26.185 "raid_level": "concat", 00:08:26.185 "superblock": false, 00:08:26.185 "num_base_bdevs": 3, 00:08:26.185 "num_base_bdevs_discovered": 3, 00:08:26.185 "num_base_bdevs_operational": 3, 00:08:26.185 "base_bdevs_list": [ 00:08:26.185 { 00:08:26.185 "name": "NewBaseBdev", 00:08:26.185 "uuid": "2bc2c4ad-a544-45e9-966d-0cddccc4e59e", 00:08:26.185 "is_configured": true, 00:08:26.185 "data_offset": 0, 00:08:26.185 "data_size": 65536 00:08:26.185 }, 00:08:26.185 { 00:08:26.185 "name": "BaseBdev2", 00:08:26.185 "uuid": "9135820b-5612-40c6-b627-e5911bbab020", 00:08:26.185 "is_configured": true, 00:08:26.185 "data_offset": 0, 00:08:26.185 "data_size": 65536 00:08:26.185 }, 00:08:26.185 { 00:08:26.185 "name": "BaseBdev3", 00:08:26.185 "uuid": "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80", 00:08:26.185 "is_configured": true, 00:08:26.185 "data_offset": 0, 00:08:26.185 "data_size": 65536 00:08:26.185 } 00:08:26.185 ] 00:08:26.185 }' 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.185 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.754 [2024-11-28 02:24:00.264816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.754 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.754 "name": "Existed_Raid", 00:08:26.754 "aliases": [ 00:08:26.754 "64c4557b-b07d-43e2-aa55-9075feb62f34" 00:08:26.754 ], 00:08:26.754 "product_name": "Raid Volume", 00:08:26.754 "block_size": 512, 00:08:26.754 "num_blocks": 196608, 00:08:26.754 "uuid": "64c4557b-b07d-43e2-aa55-9075feb62f34", 00:08:26.754 "assigned_rate_limits": { 00:08:26.754 "rw_ios_per_sec": 0, 00:08:26.754 "rw_mbytes_per_sec": 0, 00:08:26.754 "r_mbytes_per_sec": 0, 00:08:26.754 "w_mbytes_per_sec": 0 00:08:26.754 }, 00:08:26.754 "claimed": false, 00:08:26.754 "zoned": false, 00:08:26.754 "supported_io_types": { 00:08:26.754 "read": true, 00:08:26.754 "write": true, 00:08:26.754 "unmap": true, 00:08:26.754 "flush": true, 00:08:26.754 "reset": true, 00:08:26.754 "nvme_admin": false, 00:08:26.754 "nvme_io": false, 00:08:26.754 "nvme_io_md": false, 00:08:26.754 "write_zeroes": true, 
00:08:26.754 "zcopy": false, 00:08:26.754 "get_zone_info": false, 00:08:26.754 "zone_management": false, 00:08:26.754 "zone_append": false, 00:08:26.754 "compare": false, 00:08:26.754 "compare_and_write": false, 00:08:26.754 "abort": false, 00:08:26.754 "seek_hole": false, 00:08:26.754 "seek_data": false, 00:08:26.754 "copy": false, 00:08:26.754 "nvme_iov_md": false 00:08:26.754 }, 00:08:26.754 "memory_domains": [ 00:08:26.754 { 00:08:26.754 "dma_device_id": "system", 00:08:26.754 "dma_device_type": 1 00:08:26.754 }, 00:08:26.754 { 00:08:26.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.754 "dma_device_type": 2 00:08:26.754 }, 00:08:26.754 { 00:08:26.754 "dma_device_id": "system", 00:08:26.754 "dma_device_type": 1 00:08:26.754 }, 00:08:26.754 { 00:08:26.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.754 "dma_device_type": 2 00:08:26.754 }, 00:08:26.754 { 00:08:26.754 "dma_device_id": "system", 00:08:26.754 "dma_device_type": 1 00:08:26.754 }, 00:08:26.754 { 00:08:26.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.754 "dma_device_type": 2 00:08:26.754 } 00:08:26.754 ], 00:08:26.754 "driver_specific": { 00:08:26.754 "raid": { 00:08:26.754 "uuid": "64c4557b-b07d-43e2-aa55-9075feb62f34", 00:08:26.754 "strip_size_kb": 64, 00:08:26.754 "state": "online", 00:08:26.754 "raid_level": "concat", 00:08:26.754 "superblock": false, 00:08:26.754 "num_base_bdevs": 3, 00:08:26.754 "num_base_bdevs_discovered": 3, 00:08:26.754 "num_base_bdevs_operational": 3, 00:08:26.754 "base_bdevs_list": [ 00:08:26.754 { 00:08:26.754 "name": "NewBaseBdev", 00:08:26.754 "uuid": "2bc2c4ad-a544-45e9-966d-0cddccc4e59e", 00:08:26.754 "is_configured": true, 00:08:26.754 "data_offset": 0, 00:08:26.754 "data_size": 65536 00:08:26.754 }, 00:08:26.754 { 00:08:26.754 "name": "BaseBdev2", 00:08:26.754 "uuid": "9135820b-5612-40c6-b627-e5911bbab020", 00:08:26.754 "is_configured": true, 00:08:26.754 "data_offset": 0, 00:08:26.754 "data_size": 65536 00:08:26.755 }, 00:08:26.755 { 
00:08:26.755 "name": "BaseBdev3", 00:08:26.755 "uuid": "3e5fd68e-2fe2-42e4-91c4-e8d84985aa80", 00:08:26.755 "is_configured": true, 00:08:26.755 "data_offset": 0, 00:08:26.755 "data_size": 65536 00:08:26.755 } 00:08:26.755 ] 00:08:26.755 } 00:08:26.755 } 00:08:26.755 }' 00:08:26.755 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.755 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:26.755 BaseBdev2 00:08:26.755 BaseBdev3' 00:08:26.755 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.755 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.755 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.755 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.755 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:26.755 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.755 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.755 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:27.015 [2024-11-28 02:24:00.536044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.015 [2024-11-28 02:24:00.536111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.015 [2024-11-28 02:24:00.536205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.015 [2024-11-28 02:24:00.536262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.015 [2024-11-28 02:24:00.536275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65415 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65415 ']' 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65415 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65415 00:08:27.015 killing process with pid 65415 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65415' 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65415 00:08:27.015 [2024-11-28 02:24:00.581765] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.015 02:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65415 00:08:27.275 [2024-11-28 02:24:00.870266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.653 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:28.653 00:08:28.653 real 0m10.449s 00:08:28.653 user 0m16.739s 00:08:28.653 sys 0m1.765s 00:08:28.653 02:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.653 02:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.653 ************************************ 00:08:28.653 END TEST raid_state_function_test 00:08:28.653 ************************************ 00:08:28.653 02:24:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:28.653 02:24:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:28.653 02:24:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.653 02:24:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.653 ************************************ 00:08:28.653 START TEST raid_state_function_test_sb 00:08:28.653 ************************************ 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:28.653 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:28.654 Process raid pid: 66036 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66036 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66036' 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66036 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66036 ']' 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.654 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.654 [2024-11-28 02:24:02.137386] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:28.654 [2024-11-28 02:24:02.137518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.654 [2024-11-28 02:24:02.311444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.914 [2024-11-28 02:24:02.423561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.173 [2024-11-28 02:24:02.622630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.173 [2024-11-28 02:24:02.622663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 [2024-11-28 02:24:02.983379] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.432 [2024-11-28 02:24:02.983444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.432 [2024-11-28 
02:24:02.983455] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.432 [2024-11-28 02:24:02.983466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.432 [2024-11-28 02:24:02.983473] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:29.432 [2024-11-28 02:24:02.983482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.432 02:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.432 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.432 "name": "Existed_Raid", 00:08:29.432 "uuid": "50a72126-ef34-48e1-8885-aa4a0ab3b19a", 00:08:29.432 "strip_size_kb": 64, 00:08:29.432 "state": "configuring", 00:08:29.432 "raid_level": "concat", 00:08:29.432 "superblock": true, 00:08:29.432 "num_base_bdevs": 3, 00:08:29.432 "num_base_bdevs_discovered": 0, 00:08:29.432 "num_base_bdevs_operational": 3, 00:08:29.432 "base_bdevs_list": [ 00:08:29.432 { 00:08:29.432 "name": "BaseBdev1", 00:08:29.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.432 "is_configured": false, 00:08:29.432 "data_offset": 0, 00:08:29.432 "data_size": 0 00:08:29.432 }, 00:08:29.432 { 00:08:29.432 "name": "BaseBdev2", 00:08:29.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.432 "is_configured": false, 00:08:29.432 "data_offset": 0, 00:08:29.432 "data_size": 0 00:08:29.432 }, 00:08:29.432 { 00:08:29.432 "name": "BaseBdev3", 00:08:29.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.432 "is_configured": false, 00:08:29.432 "data_offset": 0, 00:08:29.432 "data_size": 0 00:08:29.432 } 00:08:29.432 ] 00:08:29.432 }' 00:08:29.432 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.432 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.000 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.000 02:24:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.000 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.000 [2024-11-28 02:24:03.394597] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.000 [2024-11-28 02:24:03.394685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:30.000 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.000 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.000 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.000 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.000 [2024-11-28 02:24:03.406587] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.000 [2024-11-28 02:24:03.406684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.000 [2024-11-28 02:24:03.406713] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.000 [2024-11-28 02:24:03.406736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.000 [2024-11-28 02:24:03.406755] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.000 [2024-11-28 02:24:03.406777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:30.001 
02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.001 [2024-11-28 02:24:03.453727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.001 BaseBdev1 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.001 [ 00:08:30.001 { 
00:08:30.001 "name": "BaseBdev1", 00:08:30.001 "aliases": [ 00:08:30.001 "9aecf2ab-f36f-4591-99ae-c852bfb85ca6" 00:08:30.001 ], 00:08:30.001 "product_name": "Malloc disk", 00:08:30.001 "block_size": 512, 00:08:30.001 "num_blocks": 65536, 00:08:30.001 "uuid": "9aecf2ab-f36f-4591-99ae-c852bfb85ca6", 00:08:30.001 "assigned_rate_limits": { 00:08:30.001 "rw_ios_per_sec": 0, 00:08:30.001 "rw_mbytes_per_sec": 0, 00:08:30.001 "r_mbytes_per_sec": 0, 00:08:30.001 "w_mbytes_per_sec": 0 00:08:30.001 }, 00:08:30.001 "claimed": true, 00:08:30.001 "claim_type": "exclusive_write", 00:08:30.001 "zoned": false, 00:08:30.001 "supported_io_types": { 00:08:30.001 "read": true, 00:08:30.001 "write": true, 00:08:30.001 "unmap": true, 00:08:30.001 "flush": true, 00:08:30.001 "reset": true, 00:08:30.001 "nvme_admin": false, 00:08:30.001 "nvme_io": false, 00:08:30.001 "nvme_io_md": false, 00:08:30.001 "write_zeroes": true, 00:08:30.001 "zcopy": true, 00:08:30.001 "get_zone_info": false, 00:08:30.001 "zone_management": false, 00:08:30.001 "zone_append": false, 00:08:30.001 "compare": false, 00:08:30.001 "compare_and_write": false, 00:08:30.001 "abort": true, 00:08:30.001 "seek_hole": false, 00:08:30.001 "seek_data": false, 00:08:30.001 "copy": true, 00:08:30.001 "nvme_iov_md": false 00:08:30.001 }, 00:08:30.001 "memory_domains": [ 00:08:30.001 { 00:08:30.001 "dma_device_id": "system", 00:08:30.001 "dma_device_type": 1 00:08:30.001 }, 00:08:30.001 { 00:08:30.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.001 "dma_device_type": 2 00:08:30.001 } 00:08:30.001 ], 00:08:30.001 "driver_specific": {} 00:08:30.001 } 00:08:30.001 ] 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.001 "name": "Existed_Raid", 00:08:30.001 "uuid": "7d837b65-da2a-49ef-8ca4-6a856bd76350", 00:08:30.001 "strip_size_kb": 64, 00:08:30.001 "state": "configuring", 00:08:30.001 "raid_level": "concat", 00:08:30.001 "superblock": true, 00:08:30.001 
"num_base_bdevs": 3, 00:08:30.001 "num_base_bdevs_discovered": 1, 00:08:30.001 "num_base_bdevs_operational": 3, 00:08:30.001 "base_bdevs_list": [ 00:08:30.001 { 00:08:30.001 "name": "BaseBdev1", 00:08:30.001 "uuid": "9aecf2ab-f36f-4591-99ae-c852bfb85ca6", 00:08:30.001 "is_configured": true, 00:08:30.001 "data_offset": 2048, 00:08:30.001 "data_size": 63488 00:08:30.001 }, 00:08:30.001 { 00:08:30.001 "name": "BaseBdev2", 00:08:30.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.001 "is_configured": false, 00:08:30.001 "data_offset": 0, 00:08:30.001 "data_size": 0 00:08:30.001 }, 00:08:30.001 { 00:08:30.001 "name": "BaseBdev3", 00:08:30.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.001 "is_configured": false, 00:08:30.001 "data_offset": 0, 00:08:30.001 "data_size": 0 00:08:30.001 } 00:08:30.001 ] 00:08:30.001 }' 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.001 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.570 [2024-11-28 02:24:03.968901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.570 [2024-11-28 02:24:03.969023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.570 
02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.570 [2024-11-28 02:24:03.980925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.570 [2024-11-28 02:24:03.982721] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.570 [2024-11-28 02:24:03.982807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.570 [2024-11-28 02:24:03.982837] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.570 [2024-11-28 02:24:03.982860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.570 02:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.570 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.570 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.570 "name": "Existed_Raid", 00:08:30.570 "uuid": "3490c3fd-e948-427f-90d4-bc9489f66572", 00:08:30.570 "strip_size_kb": 64, 00:08:30.570 "state": "configuring", 00:08:30.570 "raid_level": "concat", 00:08:30.570 "superblock": true, 00:08:30.570 "num_base_bdevs": 3, 00:08:30.570 "num_base_bdevs_discovered": 1, 00:08:30.570 "num_base_bdevs_operational": 3, 00:08:30.570 "base_bdevs_list": [ 00:08:30.570 { 00:08:30.570 "name": "BaseBdev1", 00:08:30.570 "uuid": "9aecf2ab-f36f-4591-99ae-c852bfb85ca6", 00:08:30.570 "is_configured": true, 00:08:30.570 "data_offset": 2048, 00:08:30.570 "data_size": 63488 00:08:30.570 }, 00:08:30.570 { 00:08:30.570 "name": "BaseBdev2", 00:08:30.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.570 "is_configured": false, 00:08:30.570 "data_offset": 0, 00:08:30.570 "data_size": 0 00:08:30.570 }, 00:08:30.570 { 00:08:30.570 "name": "BaseBdev3", 00:08:30.570 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:30.570 "is_configured": false, 00:08:30.570 "data_offset": 0, 00:08:30.570 "data_size": 0 00:08:30.570 } 00:08:30.570 ] 00:08:30.570 }' 00:08:30.570 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.570 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.830 [2024-11-28 02:24:04.462661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.830 BaseBdev2 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.830 [ 00:08:30.830 { 00:08:30.830 "name": "BaseBdev2", 00:08:30.830 "aliases": [ 00:08:30.830 "97fb71a7-dffb-4ac4-b298-fcd8a0700980" 00:08:30.830 ], 00:08:30.830 "product_name": "Malloc disk", 00:08:30.830 "block_size": 512, 00:08:30.830 "num_blocks": 65536, 00:08:30.830 "uuid": "97fb71a7-dffb-4ac4-b298-fcd8a0700980", 00:08:30.830 "assigned_rate_limits": { 00:08:30.830 "rw_ios_per_sec": 0, 00:08:30.830 "rw_mbytes_per_sec": 0, 00:08:30.830 "r_mbytes_per_sec": 0, 00:08:30.830 "w_mbytes_per_sec": 0 00:08:30.830 }, 00:08:30.830 "claimed": true, 00:08:30.830 "claim_type": "exclusive_write", 00:08:30.830 "zoned": false, 00:08:30.830 "supported_io_types": { 00:08:30.830 "read": true, 00:08:30.830 "write": true, 00:08:30.830 "unmap": true, 00:08:30.830 "flush": true, 00:08:30.830 "reset": true, 00:08:30.830 "nvme_admin": false, 00:08:30.830 "nvme_io": false, 00:08:30.830 "nvme_io_md": false, 00:08:30.830 "write_zeroes": true, 00:08:30.830 "zcopy": true, 00:08:30.830 "get_zone_info": false, 00:08:30.830 "zone_management": false, 00:08:30.830 "zone_append": false, 00:08:30.830 "compare": false, 00:08:30.830 "compare_and_write": false, 00:08:30.830 "abort": true, 00:08:30.830 "seek_hole": false, 00:08:30.830 "seek_data": false, 00:08:30.830 "copy": true, 00:08:30.830 "nvme_iov_md": false 00:08:30.830 }, 00:08:30.830 "memory_domains": [ 00:08:30.830 { 00:08:30.830 "dma_device_id": "system", 00:08:30.830 "dma_device_type": 1 00:08:30.830 }, 00:08:30.830 { 00:08:30.830 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.830 "dma_device_type": 2 00:08:30.830 } 00:08:30.830 ], 00:08:30.830 "driver_specific": {} 00:08:30.830 } 00:08:30.830 ] 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.830 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.090 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.090 02:24:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.091 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.091 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.091 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.091 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.091 "name": "Existed_Raid", 00:08:31.091 "uuid": "3490c3fd-e948-427f-90d4-bc9489f66572", 00:08:31.091 "strip_size_kb": 64, 00:08:31.091 "state": "configuring", 00:08:31.091 "raid_level": "concat", 00:08:31.091 "superblock": true, 00:08:31.091 "num_base_bdevs": 3, 00:08:31.091 "num_base_bdevs_discovered": 2, 00:08:31.091 "num_base_bdevs_operational": 3, 00:08:31.091 "base_bdevs_list": [ 00:08:31.091 { 00:08:31.091 "name": "BaseBdev1", 00:08:31.091 "uuid": "9aecf2ab-f36f-4591-99ae-c852bfb85ca6", 00:08:31.091 "is_configured": true, 00:08:31.091 "data_offset": 2048, 00:08:31.091 "data_size": 63488 00:08:31.091 }, 00:08:31.091 { 00:08:31.091 "name": "BaseBdev2", 00:08:31.091 "uuid": "97fb71a7-dffb-4ac4-b298-fcd8a0700980", 00:08:31.091 "is_configured": true, 00:08:31.091 "data_offset": 2048, 00:08:31.091 "data_size": 63488 00:08:31.091 }, 00:08:31.091 { 00:08:31.091 "name": "BaseBdev3", 00:08:31.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.091 "is_configured": false, 00:08:31.091 "data_offset": 0, 00:08:31.091 "data_size": 0 00:08:31.091 } 00:08:31.091 ] 00:08:31.091 }' 00:08:31.091 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.091 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.350 02:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:31.350 02:24:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.350 02:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.350 [2024-11-28 02:24:05.008188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.350 [2024-11-28 02:24:05.008534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:31.350 [2024-11-28 02:24:05.008599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:31.350 [2024-11-28 02:24:05.008880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:31.350 [2024-11-28 02:24:05.009109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:31.350 BaseBdev3 00:08:31.350 [2024-11-28 02:24:05.009155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:31.350 [2024-11-28 02:24:05.009316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.350 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.609 [ 00:08:31.609 { 00:08:31.609 "name": "BaseBdev3", 00:08:31.609 "aliases": [ 00:08:31.609 "712f200b-ed29-42f2-999c-21d8181328ac" 00:08:31.609 ], 00:08:31.609 "product_name": "Malloc disk", 00:08:31.609 "block_size": 512, 00:08:31.609 "num_blocks": 65536, 00:08:31.609 "uuid": "712f200b-ed29-42f2-999c-21d8181328ac", 00:08:31.609 "assigned_rate_limits": { 00:08:31.609 "rw_ios_per_sec": 0, 00:08:31.609 "rw_mbytes_per_sec": 0, 00:08:31.609 "r_mbytes_per_sec": 0, 00:08:31.609 "w_mbytes_per_sec": 0 00:08:31.609 }, 00:08:31.609 "claimed": true, 00:08:31.609 "claim_type": "exclusive_write", 00:08:31.609 "zoned": false, 00:08:31.609 "supported_io_types": { 00:08:31.609 "read": true, 00:08:31.609 "write": true, 00:08:31.609 "unmap": true, 00:08:31.609 "flush": true, 00:08:31.609 "reset": true, 00:08:31.609 "nvme_admin": false, 00:08:31.609 "nvme_io": false, 00:08:31.609 "nvme_io_md": false, 00:08:31.609 "write_zeroes": true, 00:08:31.609 "zcopy": true, 00:08:31.609 "get_zone_info": false, 00:08:31.609 "zone_management": false, 00:08:31.609 "zone_append": false, 00:08:31.609 "compare": false, 00:08:31.609 "compare_and_write": false, 00:08:31.609 "abort": true, 00:08:31.609 "seek_hole": false, 00:08:31.609 "seek_data": false, 
00:08:31.609 "copy": true, 00:08:31.609 "nvme_iov_md": false 00:08:31.609 }, 00:08:31.609 "memory_domains": [ 00:08:31.609 { 00:08:31.609 "dma_device_id": "system", 00:08:31.609 "dma_device_type": 1 00:08:31.609 }, 00:08:31.609 { 00:08:31.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.609 "dma_device_type": 2 00:08:31.609 } 00:08:31.609 ], 00:08:31.609 "driver_specific": {} 00:08:31.609 } 00:08:31.609 ] 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.609 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.609 "name": "Existed_Raid", 00:08:31.609 "uuid": "3490c3fd-e948-427f-90d4-bc9489f66572", 00:08:31.609 "strip_size_kb": 64, 00:08:31.609 "state": "online", 00:08:31.609 "raid_level": "concat", 00:08:31.609 "superblock": true, 00:08:31.610 "num_base_bdevs": 3, 00:08:31.610 "num_base_bdevs_discovered": 3, 00:08:31.610 "num_base_bdevs_operational": 3, 00:08:31.610 "base_bdevs_list": [ 00:08:31.610 { 00:08:31.610 "name": "BaseBdev1", 00:08:31.610 "uuid": "9aecf2ab-f36f-4591-99ae-c852bfb85ca6", 00:08:31.610 "is_configured": true, 00:08:31.610 "data_offset": 2048, 00:08:31.610 "data_size": 63488 00:08:31.610 }, 00:08:31.610 { 00:08:31.610 "name": "BaseBdev2", 00:08:31.610 "uuid": "97fb71a7-dffb-4ac4-b298-fcd8a0700980", 00:08:31.610 "is_configured": true, 00:08:31.610 "data_offset": 2048, 00:08:31.610 "data_size": 63488 00:08:31.610 }, 00:08:31.610 { 00:08:31.610 "name": "BaseBdev3", 00:08:31.610 "uuid": "712f200b-ed29-42f2-999c-21d8181328ac", 00:08:31.610 "is_configured": true, 00:08:31.610 "data_offset": 2048, 00:08:31.610 "data_size": 63488 00:08:31.610 } 00:08:31.610 ] 00:08:31.610 }' 00:08:31.610 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.610 02:24:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.869 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:31.869 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:31.869 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.869 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.869 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.869 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.869 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.869 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:31.869 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.869 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.869 [2024-11-28 02:24:05.531647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.130 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.130 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.130 "name": "Existed_Raid", 00:08:32.130 "aliases": [ 00:08:32.130 "3490c3fd-e948-427f-90d4-bc9489f66572" 00:08:32.130 ], 00:08:32.130 "product_name": "Raid Volume", 00:08:32.130 "block_size": 512, 00:08:32.130 "num_blocks": 190464, 00:08:32.130 "uuid": "3490c3fd-e948-427f-90d4-bc9489f66572", 00:08:32.130 "assigned_rate_limits": { 00:08:32.130 "rw_ios_per_sec": 0, 00:08:32.130 "rw_mbytes_per_sec": 0, 00:08:32.130 
"r_mbytes_per_sec": 0, 00:08:32.130 "w_mbytes_per_sec": 0 00:08:32.130 }, 00:08:32.130 "claimed": false, 00:08:32.130 "zoned": false, 00:08:32.130 "supported_io_types": { 00:08:32.130 "read": true, 00:08:32.130 "write": true, 00:08:32.130 "unmap": true, 00:08:32.130 "flush": true, 00:08:32.130 "reset": true, 00:08:32.130 "nvme_admin": false, 00:08:32.130 "nvme_io": false, 00:08:32.130 "nvme_io_md": false, 00:08:32.130 "write_zeroes": true, 00:08:32.130 "zcopy": false, 00:08:32.130 "get_zone_info": false, 00:08:32.130 "zone_management": false, 00:08:32.130 "zone_append": false, 00:08:32.130 "compare": false, 00:08:32.130 "compare_and_write": false, 00:08:32.130 "abort": false, 00:08:32.130 "seek_hole": false, 00:08:32.130 "seek_data": false, 00:08:32.130 "copy": false, 00:08:32.130 "nvme_iov_md": false 00:08:32.130 }, 00:08:32.130 "memory_domains": [ 00:08:32.130 { 00:08:32.130 "dma_device_id": "system", 00:08:32.130 "dma_device_type": 1 00:08:32.130 }, 00:08:32.130 { 00:08:32.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.130 "dma_device_type": 2 00:08:32.130 }, 00:08:32.130 { 00:08:32.130 "dma_device_id": "system", 00:08:32.130 "dma_device_type": 1 00:08:32.130 }, 00:08:32.130 { 00:08:32.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.130 "dma_device_type": 2 00:08:32.130 }, 00:08:32.130 { 00:08:32.130 "dma_device_id": "system", 00:08:32.130 "dma_device_type": 1 00:08:32.130 }, 00:08:32.130 { 00:08:32.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.130 "dma_device_type": 2 00:08:32.130 } 00:08:32.130 ], 00:08:32.130 "driver_specific": { 00:08:32.130 "raid": { 00:08:32.130 "uuid": "3490c3fd-e948-427f-90d4-bc9489f66572", 00:08:32.130 "strip_size_kb": 64, 00:08:32.130 "state": "online", 00:08:32.130 "raid_level": "concat", 00:08:32.130 "superblock": true, 00:08:32.130 "num_base_bdevs": 3, 00:08:32.130 "num_base_bdevs_discovered": 3, 00:08:32.130 "num_base_bdevs_operational": 3, 00:08:32.130 "base_bdevs_list": [ 00:08:32.130 { 00:08:32.130 
"name": "BaseBdev1", 00:08:32.130 "uuid": "9aecf2ab-f36f-4591-99ae-c852bfb85ca6", 00:08:32.130 "is_configured": true, 00:08:32.130 "data_offset": 2048, 00:08:32.130 "data_size": 63488 00:08:32.130 }, 00:08:32.130 { 00:08:32.130 "name": "BaseBdev2", 00:08:32.130 "uuid": "97fb71a7-dffb-4ac4-b298-fcd8a0700980", 00:08:32.130 "is_configured": true, 00:08:32.130 "data_offset": 2048, 00:08:32.130 "data_size": 63488 00:08:32.130 }, 00:08:32.130 { 00:08:32.130 "name": "BaseBdev3", 00:08:32.130 "uuid": "712f200b-ed29-42f2-999c-21d8181328ac", 00:08:32.130 "is_configured": true, 00:08:32.130 "data_offset": 2048, 00:08:32.130 "data_size": 63488 00:08:32.130 } 00:08:32.130 ] 00:08:32.130 } 00:08:32.130 } 00:08:32.130 }' 00:08:32.130 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.130 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:32.131 BaseBdev2 00:08:32.131 BaseBdev3' 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.131 02:24:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.131 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.131 [2024-11-28 02:24:05.802929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:32.131 [2024-11-28 02:24:05.802956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.131 [2024-11-28 02:24:05.803010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.392 "name": "Existed_Raid", 00:08:32.392 "uuid": "3490c3fd-e948-427f-90d4-bc9489f66572", 00:08:32.392 "strip_size_kb": 64, 00:08:32.392 "state": "offline", 00:08:32.392 "raid_level": "concat", 00:08:32.392 "superblock": true, 00:08:32.392 "num_base_bdevs": 3, 00:08:32.392 "num_base_bdevs_discovered": 2, 00:08:32.392 "num_base_bdevs_operational": 2, 00:08:32.392 "base_bdevs_list": [ 00:08:32.392 { 00:08:32.392 "name": null, 00:08:32.392 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:32.392 "is_configured": false, 00:08:32.392 "data_offset": 0, 00:08:32.392 "data_size": 63488 00:08:32.392 }, 00:08:32.392 { 00:08:32.392 "name": "BaseBdev2", 00:08:32.392 "uuid": "97fb71a7-dffb-4ac4-b298-fcd8a0700980", 00:08:32.392 "is_configured": true, 00:08:32.392 "data_offset": 2048, 00:08:32.392 "data_size": 63488 00:08:32.392 }, 00:08:32.392 { 00:08:32.392 "name": "BaseBdev3", 00:08:32.392 "uuid": "712f200b-ed29-42f2-999c-21d8181328ac", 00:08:32.392 "is_configured": true, 00:08:32.392 "data_offset": 2048, 00:08:32.392 "data_size": 63488 00:08:32.392 } 00:08:32.392 ] 00:08:32.392 }' 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.392 02:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.962 [2024-11-28 02:24:06.404983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.962 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.962 [2024-11-28 02:24:06.554955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:32.962 [2024-11-28 02:24:06.555009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.223 BaseBdev2 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.223 
02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.223 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.223 [ 00:08:33.223 { 00:08:33.223 "name": "BaseBdev2", 00:08:33.223 "aliases": [ 00:08:33.224 "20db17af-f1f3-4e85-a174-8ef1a740b066" 00:08:33.224 ], 00:08:33.224 "product_name": "Malloc disk", 00:08:33.224 "block_size": 512, 00:08:33.224 "num_blocks": 65536, 00:08:33.224 "uuid": "20db17af-f1f3-4e85-a174-8ef1a740b066", 00:08:33.224 "assigned_rate_limits": { 00:08:33.224 "rw_ios_per_sec": 0, 00:08:33.224 "rw_mbytes_per_sec": 0, 00:08:33.224 "r_mbytes_per_sec": 0, 00:08:33.224 "w_mbytes_per_sec": 0 
00:08:33.224 }, 00:08:33.224 "claimed": false, 00:08:33.224 "zoned": false, 00:08:33.224 "supported_io_types": { 00:08:33.224 "read": true, 00:08:33.224 "write": true, 00:08:33.224 "unmap": true, 00:08:33.224 "flush": true, 00:08:33.224 "reset": true, 00:08:33.224 "nvme_admin": false, 00:08:33.224 "nvme_io": false, 00:08:33.224 "nvme_io_md": false, 00:08:33.224 "write_zeroes": true, 00:08:33.224 "zcopy": true, 00:08:33.224 "get_zone_info": false, 00:08:33.224 "zone_management": false, 00:08:33.224 "zone_append": false, 00:08:33.224 "compare": false, 00:08:33.224 "compare_and_write": false, 00:08:33.224 "abort": true, 00:08:33.224 "seek_hole": false, 00:08:33.224 "seek_data": false, 00:08:33.224 "copy": true, 00:08:33.224 "nvme_iov_md": false 00:08:33.224 }, 00:08:33.224 "memory_domains": [ 00:08:33.224 { 00:08:33.224 "dma_device_id": "system", 00:08:33.224 "dma_device_type": 1 00:08:33.224 }, 00:08:33.224 { 00:08:33.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.224 "dma_device_type": 2 00:08:33.224 } 00:08:33.224 ], 00:08:33.224 "driver_specific": {} 00:08:33.224 } 00:08:33.224 ] 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.224 BaseBdev3 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.224 [ 00:08:33.224 { 00:08:33.224 "name": "BaseBdev3", 00:08:33.224 "aliases": [ 00:08:33.224 "2c12397b-2fe8-4e7f-9271-065d67495c93" 00:08:33.224 ], 00:08:33.224 "product_name": "Malloc disk", 00:08:33.224 "block_size": 512, 00:08:33.224 "num_blocks": 65536, 00:08:33.224 "uuid": "2c12397b-2fe8-4e7f-9271-065d67495c93", 00:08:33.224 "assigned_rate_limits": { 00:08:33.224 "rw_ios_per_sec": 0, 00:08:33.224 "rw_mbytes_per_sec": 0, 
00:08:33.224 "r_mbytes_per_sec": 0, 00:08:33.224 "w_mbytes_per_sec": 0 00:08:33.224 }, 00:08:33.224 "claimed": false, 00:08:33.224 "zoned": false, 00:08:33.224 "supported_io_types": { 00:08:33.224 "read": true, 00:08:33.224 "write": true, 00:08:33.224 "unmap": true, 00:08:33.224 "flush": true, 00:08:33.224 "reset": true, 00:08:33.224 "nvme_admin": false, 00:08:33.224 "nvme_io": false, 00:08:33.224 "nvme_io_md": false, 00:08:33.224 "write_zeroes": true, 00:08:33.224 "zcopy": true, 00:08:33.224 "get_zone_info": false, 00:08:33.224 "zone_management": false, 00:08:33.224 "zone_append": false, 00:08:33.224 "compare": false, 00:08:33.224 "compare_and_write": false, 00:08:33.224 "abort": true, 00:08:33.224 "seek_hole": false, 00:08:33.224 "seek_data": false, 00:08:33.224 "copy": true, 00:08:33.224 "nvme_iov_md": false 00:08:33.224 }, 00:08:33.224 "memory_domains": [ 00:08:33.224 { 00:08:33.224 "dma_device_id": "system", 00:08:33.224 "dma_device_type": 1 00:08:33.224 }, 00:08:33.224 { 00:08:33.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.224 "dma_device_type": 2 00:08:33.224 } 00:08:33.224 ], 00:08:33.224 "driver_specific": {} 00:08:33.224 } 00:08:33.224 ] 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.224 [2024-11-28 02:24:06.861544] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.224 [2024-11-28 02:24:06.861579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.224 [2024-11-28 02:24:06.861599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.224 [2024-11-28 02:24:06.863282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.224 02:24:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.224 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.484 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.484 "name": "Existed_Raid", 00:08:33.484 "uuid": "4dc3551b-435b-4494-90f0-60ec4deb54e2", 00:08:33.484 "strip_size_kb": 64, 00:08:33.484 "state": "configuring", 00:08:33.484 "raid_level": "concat", 00:08:33.484 "superblock": true, 00:08:33.484 "num_base_bdevs": 3, 00:08:33.484 "num_base_bdevs_discovered": 2, 00:08:33.484 "num_base_bdevs_operational": 3, 00:08:33.484 "base_bdevs_list": [ 00:08:33.484 { 00:08:33.484 "name": "BaseBdev1", 00:08:33.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.484 "is_configured": false, 00:08:33.484 "data_offset": 0, 00:08:33.484 "data_size": 0 00:08:33.484 }, 00:08:33.484 { 00:08:33.484 "name": "BaseBdev2", 00:08:33.484 "uuid": "20db17af-f1f3-4e85-a174-8ef1a740b066", 00:08:33.484 "is_configured": true, 00:08:33.484 "data_offset": 2048, 00:08:33.484 "data_size": 63488 00:08:33.484 }, 00:08:33.484 { 00:08:33.484 "name": "BaseBdev3", 00:08:33.484 "uuid": "2c12397b-2fe8-4e7f-9271-065d67495c93", 00:08:33.484 "is_configured": true, 00:08:33.484 "data_offset": 2048, 00:08:33.484 "data_size": 63488 00:08:33.484 } 00:08:33.484 ] 00:08:33.484 }' 00:08:33.484 02:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.484 02:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.743 [2024-11-28 02:24:07.292859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.743 "name": "Existed_Raid", 00:08:33.743 "uuid": "4dc3551b-435b-4494-90f0-60ec4deb54e2", 00:08:33.743 "strip_size_kb": 64, 00:08:33.743 "state": "configuring", 00:08:33.743 "raid_level": "concat", 00:08:33.743 "superblock": true, 00:08:33.743 "num_base_bdevs": 3, 00:08:33.743 "num_base_bdevs_discovered": 1, 00:08:33.743 "num_base_bdevs_operational": 3, 00:08:33.743 "base_bdevs_list": [ 00:08:33.743 { 00:08:33.743 "name": "BaseBdev1", 00:08:33.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.743 "is_configured": false, 00:08:33.743 "data_offset": 0, 00:08:33.743 "data_size": 0 00:08:33.743 }, 00:08:33.743 { 00:08:33.743 "name": null, 00:08:33.743 "uuid": "20db17af-f1f3-4e85-a174-8ef1a740b066", 00:08:33.743 "is_configured": false, 00:08:33.743 "data_offset": 0, 00:08:33.743 "data_size": 63488 00:08:33.743 }, 00:08:33.743 { 00:08:33.743 "name": "BaseBdev3", 00:08:33.743 "uuid": "2c12397b-2fe8-4e7f-9271-065d67495c93", 00:08:33.743 "is_configured": true, 00:08:33.743 "data_offset": 2048, 00:08:33.743 "data_size": 63488 00:08:33.743 } 00:08:33.743 ] 00:08:33.743 }' 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.743 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.312 02:24:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.312 [2024-11-28 02:24:07.816606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.312 BaseBdev1 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:34.312 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.313 
02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.313 [ 00:08:34.313 { 00:08:34.313 "name": "BaseBdev1", 00:08:34.313 "aliases": [ 00:08:34.313 "66ad8c8c-a631-4420-8b98-744b2c84e7c7" 00:08:34.313 ], 00:08:34.313 "product_name": "Malloc disk", 00:08:34.313 "block_size": 512, 00:08:34.313 "num_blocks": 65536, 00:08:34.313 "uuid": "66ad8c8c-a631-4420-8b98-744b2c84e7c7", 00:08:34.313 "assigned_rate_limits": { 00:08:34.313 "rw_ios_per_sec": 0, 00:08:34.313 "rw_mbytes_per_sec": 0, 00:08:34.313 "r_mbytes_per_sec": 0, 00:08:34.313 "w_mbytes_per_sec": 0 00:08:34.313 }, 00:08:34.313 "claimed": true, 00:08:34.313 "claim_type": "exclusive_write", 00:08:34.313 "zoned": false, 00:08:34.313 "supported_io_types": { 00:08:34.313 "read": true, 00:08:34.313 "write": true, 00:08:34.313 "unmap": true, 00:08:34.313 "flush": true, 00:08:34.313 "reset": true, 00:08:34.313 "nvme_admin": false, 00:08:34.313 "nvme_io": false, 00:08:34.313 "nvme_io_md": false, 00:08:34.313 "write_zeroes": true, 00:08:34.313 "zcopy": true, 00:08:34.313 "get_zone_info": false, 00:08:34.313 "zone_management": false, 00:08:34.313 "zone_append": false, 00:08:34.313 "compare": false, 00:08:34.313 "compare_and_write": false, 00:08:34.313 "abort": true, 00:08:34.313 "seek_hole": false, 00:08:34.313 "seek_data": false, 00:08:34.313 "copy": true, 00:08:34.313 "nvme_iov_md": false 00:08:34.313 }, 00:08:34.313 "memory_domains": [ 00:08:34.313 { 00:08:34.313 "dma_device_id": "system", 00:08:34.313 "dma_device_type": 1 00:08:34.313 }, 00:08:34.313 { 00:08:34.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:34.313 "dma_device_type": 2 00:08:34.313 } 00:08:34.313 ], 00:08:34.313 "driver_specific": {} 00:08:34.313 } 00:08:34.313 ] 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.313 "name": "Existed_Raid", 00:08:34.313 "uuid": "4dc3551b-435b-4494-90f0-60ec4deb54e2", 00:08:34.313 "strip_size_kb": 64, 00:08:34.313 "state": "configuring", 00:08:34.313 "raid_level": "concat", 00:08:34.313 "superblock": true, 00:08:34.313 "num_base_bdevs": 3, 00:08:34.313 "num_base_bdevs_discovered": 2, 00:08:34.313 "num_base_bdevs_operational": 3, 00:08:34.313 "base_bdevs_list": [ 00:08:34.313 { 00:08:34.313 "name": "BaseBdev1", 00:08:34.313 "uuid": "66ad8c8c-a631-4420-8b98-744b2c84e7c7", 00:08:34.313 "is_configured": true, 00:08:34.313 "data_offset": 2048, 00:08:34.313 "data_size": 63488 00:08:34.313 }, 00:08:34.313 { 00:08:34.313 "name": null, 00:08:34.313 "uuid": "20db17af-f1f3-4e85-a174-8ef1a740b066", 00:08:34.313 "is_configured": false, 00:08:34.313 "data_offset": 0, 00:08:34.313 "data_size": 63488 00:08:34.313 }, 00:08:34.313 { 00:08:34.313 "name": "BaseBdev3", 00:08:34.313 "uuid": "2c12397b-2fe8-4e7f-9271-065d67495c93", 00:08:34.313 "is_configured": true, 00:08:34.313 "data_offset": 2048, 00:08:34.313 "data_size": 63488 00:08:34.313 } 00:08:34.313 ] 00:08:34.313 }' 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.313 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.883 [2024-11-28 02:24:08.399667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.883 "name": "Existed_Raid", 00:08:34.883 "uuid": "4dc3551b-435b-4494-90f0-60ec4deb54e2", 00:08:34.883 "strip_size_kb": 64, 00:08:34.883 "state": "configuring", 00:08:34.883 "raid_level": "concat", 00:08:34.883 "superblock": true, 00:08:34.883 "num_base_bdevs": 3, 00:08:34.883 "num_base_bdevs_discovered": 1, 00:08:34.883 "num_base_bdevs_operational": 3, 00:08:34.883 "base_bdevs_list": [ 00:08:34.883 { 00:08:34.883 "name": "BaseBdev1", 00:08:34.883 "uuid": "66ad8c8c-a631-4420-8b98-744b2c84e7c7", 00:08:34.883 "is_configured": true, 00:08:34.883 "data_offset": 2048, 00:08:34.883 "data_size": 63488 00:08:34.883 }, 00:08:34.883 { 00:08:34.883 "name": null, 00:08:34.883 "uuid": "20db17af-f1f3-4e85-a174-8ef1a740b066", 00:08:34.883 "is_configured": false, 00:08:34.883 "data_offset": 0, 00:08:34.883 "data_size": 63488 00:08:34.883 }, 00:08:34.883 { 00:08:34.883 "name": null, 00:08:34.883 "uuid": "2c12397b-2fe8-4e7f-9271-065d67495c93", 00:08:34.883 "is_configured": false, 00:08:34.883 "data_offset": 0, 00:08:34.883 "data_size": 63488 00:08:34.883 } 00:08:34.883 ] 00:08:34.883 }' 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.883 02:24:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.453 [2024-11-28 02:24:08.910910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.453 02:24:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.453 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.453 "name": "Existed_Raid", 00:08:35.453 "uuid": "4dc3551b-435b-4494-90f0-60ec4deb54e2", 00:08:35.453 "strip_size_kb": 64, 00:08:35.453 "state": "configuring", 00:08:35.453 "raid_level": "concat", 00:08:35.453 "superblock": true, 00:08:35.453 "num_base_bdevs": 3, 00:08:35.453 "num_base_bdevs_discovered": 2, 00:08:35.453 "num_base_bdevs_operational": 3, 00:08:35.453 "base_bdevs_list": [ 00:08:35.453 { 00:08:35.453 "name": "BaseBdev1", 00:08:35.453 "uuid": "66ad8c8c-a631-4420-8b98-744b2c84e7c7", 00:08:35.454 "is_configured": true, 00:08:35.454 "data_offset": 2048, 00:08:35.454 "data_size": 63488 00:08:35.454 }, 00:08:35.454 { 00:08:35.454 "name": null, 00:08:35.454 "uuid": "20db17af-f1f3-4e85-a174-8ef1a740b066", 00:08:35.454 "is_configured": 
false, 00:08:35.454 "data_offset": 0, 00:08:35.454 "data_size": 63488 00:08:35.454 }, 00:08:35.454 { 00:08:35.454 "name": "BaseBdev3", 00:08:35.454 "uuid": "2c12397b-2fe8-4e7f-9271-065d67495c93", 00:08:35.454 "is_configured": true, 00:08:35.454 "data_offset": 2048, 00:08:35.454 "data_size": 63488 00:08:35.454 } 00:08:35.454 ] 00:08:35.454 }' 00:08:35.454 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.454 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.713 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.713 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.713 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.713 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.973 [2024-11-28 02:24:09.430093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.973 02:24:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.973 "name": "Existed_Raid", 00:08:35.973 "uuid": "4dc3551b-435b-4494-90f0-60ec4deb54e2", 00:08:35.973 "strip_size_kb": 64, 00:08:35.973 "state": "configuring", 00:08:35.973 "raid_level": "concat", 00:08:35.973 "superblock": true, 00:08:35.973 "num_base_bdevs": 3, 00:08:35.973 
"num_base_bdevs_discovered": 1, 00:08:35.973 "num_base_bdevs_operational": 3, 00:08:35.973 "base_bdevs_list": [ 00:08:35.973 { 00:08:35.973 "name": null, 00:08:35.973 "uuid": "66ad8c8c-a631-4420-8b98-744b2c84e7c7", 00:08:35.973 "is_configured": false, 00:08:35.973 "data_offset": 0, 00:08:35.973 "data_size": 63488 00:08:35.973 }, 00:08:35.973 { 00:08:35.973 "name": null, 00:08:35.973 "uuid": "20db17af-f1f3-4e85-a174-8ef1a740b066", 00:08:35.973 "is_configured": false, 00:08:35.973 "data_offset": 0, 00:08:35.973 "data_size": 63488 00:08:35.973 }, 00:08:35.973 { 00:08:35.973 "name": "BaseBdev3", 00:08:35.973 "uuid": "2c12397b-2fe8-4e7f-9271-065d67495c93", 00:08:35.973 "is_configured": true, 00:08:35.973 "data_offset": 2048, 00:08:35.973 "data_size": 63488 00:08:35.973 } 00:08:35.973 ] 00:08:35.973 }' 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.973 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.543 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.543 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:36.543 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.543 02:24:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 [2024-11-28 02:24:10.013533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.543 
02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.543 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.543 "name": "Existed_Raid", 00:08:36.543 "uuid": "4dc3551b-435b-4494-90f0-60ec4deb54e2", 00:08:36.543 "strip_size_kb": 64, 00:08:36.543 "state": "configuring", 00:08:36.543 "raid_level": "concat", 00:08:36.543 "superblock": true, 00:08:36.543 "num_base_bdevs": 3, 00:08:36.543 "num_base_bdevs_discovered": 2, 00:08:36.543 "num_base_bdevs_operational": 3, 00:08:36.543 "base_bdevs_list": [ 00:08:36.543 { 00:08:36.543 "name": null, 00:08:36.543 "uuid": "66ad8c8c-a631-4420-8b98-744b2c84e7c7", 00:08:36.543 "is_configured": false, 00:08:36.543 "data_offset": 0, 00:08:36.543 "data_size": 63488 00:08:36.543 }, 00:08:36.543 { 00:08:36.543 "name": "BaseBdev2", 00:08:36.543 "uuid": "20db17af-f1f3-4e85-a174-8ef1a740b066", 00:08:36.543 "is_configured": true, 00:08:36.543 "data_offset": 2048, 00:08:36.543 "data_size": 63488 00:08:36.543 }, 00:08:36.543 { 00:08:36.543 "name": "BaseBdev3", 00:08:36.543 "uuid": "2c12397b-2fe8-4e7f-9271-065d67495c93", 00:08:36.543 "is_configured": true, 00:08:36.543 "data_offset": 2048, 00:08:36.544 "data_size": 63488 00:08:36.544 } 00:08:36.544 ] 00:08:36.544 }' 00:08:36.544 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.544 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.803 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.803 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.803 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.803 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:08:36.803 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 66ad8c8c-a631-4420-8b98-744b2c84e7c7 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.072 [2024-11-28 02:24:10.576779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:37.072 [2024-11-28 02:24:10.577032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:37.072 [2024-11-28 02:24:10.577050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:37.072 [2024-11-28 02:24:10.577320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:37.072 [2024-11-28 02:24:10.577473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:37.072 [2024-11-28 02:24:10.577483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:08:37.072 [2024-11-28 02:24:10.577633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.072 NewBaseBdev 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.072 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.072 [ 00:08:37.072 { 00:08:37.072 "name": "NewBaseBdev", 00:08:37.072 "aliases": [ 00:08:37.072 "66ad8c8c-a631-4420-8b98-744b2c84e7c7" 00:08:37.072 ], 00:08:37.072 "product_name": "Malloc disk", 00:08:37.072 "block_size": 512, 
00:08:37.072 "num_blocks": 65536, 00:08:37.072 "uuid": "66ad8c8c-a631-4420-8b98-744b2c84e7c7", 00:08:37.072 "assigned_rate_limits": { 00:08:37.072 "rw_ios_per_sec": 0, 00:08:37.072 "rw_mbytes_per_sec": 0, 00:08:37.072 "r_mbytes_per_sec": 0, 00:08:37.072 "w_mbytes_per_sec": 0 00:08:37.072 }, 00:08:37.072 "claimed": true, 00:08:37.072 "claim_type": "exclusive_write", 00:08:37.072 "zoned": false, 00:08:37.072 "supported_io_types": { 00:08:37.072 "read": true, 00:08:37.072 "write": true, 00:08:37.072 "unmap": true, 00:08:37.072 "flush": true, 00:08:37.072 "reset": true, 00:08:37.073 "nvme_admin": false, 00:08:37.073 "nvme_io": false, 00:08:37.073 "nvme_io_md": false, 00:08:37.073 "write_zeroes": true, 00:08:37.073 "zcopy": true, 00:08:37.073 "get_zone_info": false, 00:08:37.073 "zone_management": false, 00:08:37.073 "zone_append": false, 00:08:37.073 "compare": false, 00:08:37.073 "compare_and_write": false, 00:08:37.073 "abort": true, 00:08:37.073 "seek_hole": false, 00:08:37.073 "seek_data": false, 00:08:37.073 "copy": true, 00:08:37.073 "nvme_iov_md": false 00:08:37.073 }, 00:08:37.073 "memory_domains": [ 00:08:37.073 { 00:08:37.073 "dma_device_id": "system", 00:08:37.073 "dma_device_type": 1 00:08:37.073 }, 00:08:37.073 { 00:08:37.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.073 "dma_device_type": 2 00:08:37.073 } 00:08:37.073 ], 00:08:37.073 "driver_specific": {} 00:08:37.073 } 00:08:37.073 ] 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.073 "name": "Existed_Raid", 00:08:37.073 "uuid": "4dc3551b-435b-4494-90f0-60ec4deb54e2", 00:08:37.073 "strip_size_kb": 64, 00:08:37.073 "state": "online", 00:08:37.073 "raid_level": "concat", 00:08:37.073 "superblock": true, 00:08:37.073 "num_base_bdevs": 3, 00:08:37.073 "num_base_bdevs_discovered": 3, 00:08:37.073 "num_base_bdevs_operational": 3, 00:08:37.073 "base_bdevs_list": [ 00:08:37.073 { 00:08:37.073 "name": "NewBaseBdev", 00:08:37.073 "uuid": 
"66ad8c8c-a631-4420-8b98-744b2c84e7c7", 00:08:37.073 "is_configured": true, 00:08:37.073 "data_offset": 2048, 00:08:37.073 "data_size": 63488 00:08:37.073 }, 00:08:37.073 { 00:08:37.073 "name": "BaseBdev2", 00:08:37.073 "uuid": "20db17af-f1f3-4e85-a174-8ef1a740b066", 00:08:37.073 "is_configured": true, 00:08:37.073 "data_offset": 2048, 00:08:37.073 "data_size": 63488 00:08:37.073 }, 00:08:37.073 { 00:08:37.073 "name": "BaseBdev3", 00:08:37.073 "uuid": "2c12397b-2fe8-4e7f-9271-065d67495c93", 00:08:37.073 "is_configured": true, 00:08:37.073 "data_offset": 2048, 00:08:37.073 "data_size": 63488 00:08:37.073 } 00:08:37.073 ] 00:08:37.073 }' 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.073 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.351 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:37.351 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:37.351 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.351 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.351 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.351 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.351 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:37.351 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.351 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.351 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:37.351 [2024-11-28 02:24:10.988418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.351 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.351 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.351 "name": "Existed_Raid", 00:08:37.351 "aliases": [ 00:08:37.351 "4dc3551b-435b-4494-90f0-60ec4deb54e2" 00:08:37.351 ], 00:08:37.351 "product_name": "Raid Volume", 00:08:37.351 "block_size": 512, 00:08:37.351 "num_blocks": 190464, 00:08:37.351 "uuid": "4dc3551b-435b-4494-90f0-60ec4deb54e2", 00:08:37.351 "assigned_rate_limits": { 00:08:37.351 "rw_ios_per_sec": 0, 00:08:37.351 "rw_mbytes_per_sec": 0, 00:08:37.351 "r_mbytes_per_sec": 0, 00:08:37.351 "w_mbytes_per_sec": 0 00:08:37.351 }, 00:08:37.351 "claimed": false, 00:08:37.351 "zoned": false, 00:08:37.351 "supported_io_types": { 00:08:37.351 "read": true, 00:08:37.351 "write": true, 00:08:37.351 "unmap": true, 00:08:37.351 "flush": true, 00:08:37.351 "reset": true, 00:08:37.351 "nvme_admin": false, 00:08:37.351 "nvme_io": false, 00:08:37.351 "nvme_io_md": false, 00:08:37.351 "write_zeroes": true, 00:08:37.351 "zcopy": false, 00:08:37.351 "get_zone_info": false, 00:08:37.351 "zone_management": false, 00:08:37.351 "zone_append": false, 00:08:37.351 "compare": false, 00:08:37.351 "compare_and_write": false, 00:08:37.351 "abort": false, 00:08:37.351 "seek_hole": false, 00:08:37.351 "seek_data": false, 00:08:37.351 "copy": false, 00:08:37.351 "nvme_iov_md": false 00:08:37.351 }, 00:08:37.351 "memory_domains": [ 00:08:37.351 { 00:08:37.351 "dma_device_id": "system", 00:08:37.351 "dma_device_type": 1 00:08:37.351 }, 00:08:37.351 { 00:08:37.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.351 "dma_device_type": 2 00:08:37.351 }, 00:08:37.351 { 00:08:37.351 "dma_device_id": "system", 00:08:37.351 "dma_device_type": 1 00:08:37.351 }, 00:08:37.351 { 00:08:37.351 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.351 "dma_device_type": 2 00:08:37.351 }, 00:08:37.351 { 00:08:37.351 "dma_device_id": "system", 00:08:37.351 "dma_device_type": 1 00:08:37.351 }, 00:08:37.351 { 00:08:37.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.351 "dma_device_type": 2 00:08:37.351 } 00:08:37.351 ], 00:08:37.351 "driver_specific": { 00:08:37.351 "raid": { 00:08:37.351 "uuid": "4dc3551b-435b-4494-90f0-60ec4deb54e2", 00:08:37.351 "strip_size_kb": 64, 00:08:37.351 "state": "online", 00:08:37.351 "raid_level": "concat", 00:08:37.351 "superblock": true, 00:08:37.351 "num_base_bdevs": 3, 00:08:37.351 "num_base_bdevs_discovered": 3, 00:08:37.351 "num_base_bdevs_operational": 3, 00:08:37.351 "base_bdevs_list": [ 00:08:37.351 { 00:08:37.351 "name": "NewBaseBdev", 00:08:37.351 "uuid": "66ad8c8c-a631-4420-8b98-744b2c84e7c7", 00:08:37.351 "is_configured": true, 00:08:37.351 "data_offset": 2048, 00:08:37.351 "data_size": 63488 00:08:37.351 }, 00:08:37.351 { 00:08:37.351 "name": "BaseBdev2", 00:08:37.351 "uuid": "20db17af-f1f3-4e85-a174-8ef1a740b066", 00:08:37.351 "is_configured": true, 00:08:37.351 "data_offset": 2048, 00:08:37.351 "data_size": 63488 00:08:37.351 }, 00:08:37.351 { 00:08:37.351 "name": "BaseBdev3", 00:08:37.351 "uuid": "2c12397b-2fe8-4e7f-9271-065d67495c93", 00:08:37.351 "is_configured": true, 00:08:37.351 "data_offset": 2048, 00:08:37.351 "data_size": 63488 00:08:37.351 } 00:08:37.351 ] 00:08:37.351 } 00:08:37.351 } 00:08:37.351 }' 00:08:37.351 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.612 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:37.612 BaseBdev2 00:08:37.613 BaseBdev3' 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.613 [2024-11-28 02:24:11.267624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.613 [2024-11-28 02:24:11.267657] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.613 [2024-11-28 02:24:11.267733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.613 [2024-11-28 02:24:11.267795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.613 [2024-11-28 02:24:11.267811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66036 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66036 ']' 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66036 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.613 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66036 00:08:37.872 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.872 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.872 killing process with pid 66036 00:08:37.872 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66036' 00:08:37.872 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66036 00:08:37.872 [2024-11-28 02:24:11.301303] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.872 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66036 00:08:38.132 [2024-11-28 02:24:11.596688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.071 02:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:39.071 00:08:39.071 real 0m10.648s 00:08:39.071 user 0m17.053s 00:08:39.071 sys 0m1.780s 00:08:39.071 02:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:39.071 02:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.071 ************************************ 00:08:39.071 END TEST raid_state_function_test_sb 00:08:39.071 ************************************ 00:08:39.071 02:24:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:39.071 02:24:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:39.071 02:24:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.071 02:24:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.330 ************************************ 00:08:39.330 START TEST raid_superblock_test 00:08:39.330 ************************************ 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:39.330 02:24:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66656 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66656 00:08:39.330 02:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66656 ']' 00:08:39.331 02:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.331 02:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.331 02:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.331 02:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.331 02:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.331 [2024-11-28 02:24:12.848871] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:39.331 [2024-11-28 02:24:12.848992] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66656 ] 00:08:39.590 [2024-11-28 02:24:13.023389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.590 [2024-11-28 02:24:13.137854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.848 [2024-11-28 02:24:13.340867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.848 [2024-11-28 02:24:13.340907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.106 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.106 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:40.106 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:40.106 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.106 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:40.106 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:40.106 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:40.107 
02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.107 malloc1 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.107 [2024-11-28 02:24:13.726598] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:40.107 [2024-11-28 02:24:13.726653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.107 [2024-11-28 02:24:13.726674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:40.107 [2024-11-28 02:24:13.726683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.107 [2024-11-28 02:24:13.728712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.107 [2024-11-28 02:24:13.728746] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:40.107 pt1 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.107 malloc2 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.107 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.366 [2024-11-28 02:24:13.785986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:40.366 [2024-11-28 02:24:13.786043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.366 [2024-11-28 02:24:13.786070] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:40.366 [2024-11-28 02:24:13.786079] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.366 [2024-11-28 02:24:13.788202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.366 [2024-11-28 02:24:13.788237] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:40.366 
pt2 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.366 malloc3 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.366 [2024-11-28 02:24:13.857211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:40.366 [2024-11-28 02:24:13.857266] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.366 [2024-11-28 02:24:13.857289] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:40.366 [2024-11-28 02:24:13.857298] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.366 [2024-11-28 02:24:13.859349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.366 [2024-11-28 02:24:13.859382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:40.366 pt3 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.366 [2024-11-28 02:24:13.869241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:40.366 [2024-11-28 02:24:13.871107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:40.366 [2024-11-28 02:24:13.871176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:40.366 [2024-11-28 02:24:13.871333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:40.366 [2024-11-28 02:24:13.871353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:40.366 [2024-11-28 02:24:13.871626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:40.366 [2024-11-28 02:24:13.871794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:40.366 [2024-11-28 02:24:13.871810] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:40.366 [2024-11-28 02:24:13.871979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.366 02:24:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.366 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.366 "name": "raid_bdev1", 00:08:40.366 "uuid": "e6c41653-aa61-4d9c-b71b-7c28ab76fe18", 00:08:40.366 "strip_size_kb": 64, 00:08:40.366 "state": "online", 00:08:40.366 "raid_level": "concat", 00:08:40.366 "superblock": true, 00:08:40.366 "num_base_bdevs": 3, 00:08:40.366 "num_base_bdevs_discovered": 3, 00:08:40.367 "num_base_bdevs_operational": 3, 00:08:40.367 "base_bdevs_list": [ 00:08:40.367 { 00:08:40.367 "name": "pt1", 00:08:40.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.367 "is_configured": true, 00:08:40.367 "data_offset": 2048, 00:08:40.367 "data_size": 63488 00:08:40.367 }, 00:08:40.367 { 00:08:40.367 "name": "pt2", 00:08:40.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.367 "is_configured": true, 00:08:40.367 "data_offset": 2048, 00:08:40.367 "data_size": 63488 00:08:40.367 }, 00:08:40.367 { 00:08:40.367 "name": "pt3", 00:08:40.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:40.367 "is_configured": true, 00:08:40.367 "data_offset": 2048, 00:08:40.367 "data_size": 63488 00:08:40.367 } 00:08:40.367 ] 00:08:40.367 }' 00:08:40.367 02:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.367 02:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
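For context, the `verify_raid_bdev_state` call traced above (bdev_raid.sh@113) pulls the matching entry out of `rpc_cmd bdev_raid_get_bdevs all` with a `jq` select and then checks individual fields. A minimal local sketch of that extraction, using a trimmed copy of the JSON captured in this log (no SPDK process needed; the field values are taken verbatim from the trace, and the array wrapper stands in for the RPC output):

```shell
#!/usr/bin/env bash
# Sketch of the field extraction behind verify_raid_bdev_state.
# In the real test the JSON comes from `rpc_cmd bdev_raid_get_bdevs all`;
# here it is a here-doc copy of what this log recorded.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<'EOF'
[
  {
    "name": "raid_bdev1",
    "uuid": "e6c41653-aa61-4d9c-b71b-7c28ab76fe18",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "concat",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
EOF
)
# The helper then compares fields such as these against its arguments
# (expected_state=online, raid_level=concat, strip_size=64, operational=3).
state=$(jq -r '.state' <<<"$raid_bdev_info")
level=$(jq -r '.raid_level' <<<"$raid_bdev_info")
echo "$state $level"
```

Running this prints `online concat`, matching the `verify_raid_bdev_state raid_bdev1 online concat 64 3` invocation in the trace.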
base_bdev_names 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.933 [2024-11-28 02:24:14.344744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.933 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.933 "name": "raid_bdev1", 00:08:40.933 "aliases": [ 00:08:40.933 "e6c41653-aa61-4d9c-b71b-7c28ab76fe18" 00:08:40.933 ], 00:08:40.933 "product_name": "Raid Volume", 00:08:40.933 "block_size": 512, 00:08:40.933 "num_blocks": 190464, 00:08:40.933 "uuid": "e6c41653-aa61-4d9c-b71b-7c28ab76fe18", 00:08:40.933 "assigned_rate_limits": { 00:08:40.933 "rw_ios_per_sec": 0, 00:08:40.933 "rw_mbytes_per_sec": 0, 00:08:40.933 "r_mbytes_per_sec": 0, 00:08:40.933 "w_mbytes_per_sec": 0 00:08:40.933 }, 00:08:40.933 "claimed": false, 00:08:40.934 "zoned": false, 00:08:40.934 "supported_io_types": { 00:08:40.934 "read": true, 00:08:40.934 "write": true, 00:08:40.934 "unmap": true, 00:08:40.934 "flush": true, 00:08:40.934 "reset": true, 00:08:40.934 "nvme_admin": false, 00:08:40.934 "nvme_io": false, 00:08:40.934 "nvme_io_md": false, 00:08:40.934 "write_zeroes": true, 00:08:40.934 "zcopy": false, 00:08:40.934 "get_zone_info": false, 00:08:40.934 "zone_management": false, 00:08:40.934 "zone_append": false, 00:08:40.934 "compare": 
false, 00:08:40.934 "compare_and_write": false, 00:08:40.934 "abort": false, 00:08:40.934 "seek_hole": false, 00:08:40.934 "seek_data": false, 00:08:40.934 "copy": false, 00:08:40.934 "nvme_iov_md": false 00:08:40.934 }, 00:08:40.934 "memory_domains": [ 00:08:40.934 { 00:08:40.934 "dma_device_id": "system", 00:08:40.934 "dma_device_type": 1 00:08:40.934 }, 00:08:40.934 { 00:08:40.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.934 "dma_device_type": 2 00:08:40.934 }, 00:08:40.934 { 00:08:40.934 "dma_device_id": "system", 00:08:40.934 "dma_device_type": 1 00:08:40.934 }, 00:08:40.934 { 00:08:40.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.934 "dma_device_type": 2 00:08:40.934 }, 00:08:40.934 { 00:08:40.934 "dma_device_id": "system", 00:08:40.934 "dma_device_type": 1 00:08:40.934 }, 00:08:40.934 { 00:08:40.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.934 "dma_device_type": 2 00:08:40.934 } 00:08:40.934 ], 00:08:40.934 "driver_specific": { 00:08:40.934 "raid": { 00:08:40.934 "uuid": "e6c41653-aa61-4d9c-b71b-7c28ab76fe18", 00:08:40.934 "strip_size_kb": 64, 00:08:40.934 "state": "online", 00:08:40.934 "raid_level": "concat", 00:08:40.934 "superblock": true, 00:08:40.934 "num_base_bdevs": 3, 00:08:40.934 "num_base_bdevs_discovered": 3, 00:08:40.934 "num_base_bdevs_operational": 3, 00:08:40.934 "base_bdevs_list": [ 00:08:40.934 { 00:08:40.934 "name": "pt1", 00:08:40.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.934 "is_configured": true, 00:08:40.934 "data_offset": 2048, 00:08:40.934 "data_size": 63488 00:08:40.934 }, 00:08:40.934 { 00:08:40.934 "name": "pt2", 00:08:40.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.934 "is_configured": true, 00:08:40.934 "data_offset": 2048, 00:08:40.934 "data_size": 63488 00:08:40.934 }, 00:08:40.934 { 00:08:40.934 "name": "pt3", 00:08:40.934 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:40.934 "is_configured": true, 00:08:40.934 "data_offset": 2048, 00:08:40.934 
"data_size": 63488 00:08:40.934 } 00:08:40.934 ] 00:08:40.934 } 00:08:40.934 } 00:08:40.934 }' 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:40.934 pt2 00:08:40.934 pt3' 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
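The `base_bdev_names='pt1 pt2 pt3'` assignment at bdev_raid.sh@188 comes from filtering the raid bdev's `driver_specific.raid.base_bdevs_list` for configured entries. A sketch of that filter against a trimmed copy of the `bdev_get_bdevs -b raid_bdev1` output shown above (the real script first unwraps the RPC array with `jq '.[]'`; the object below is already unwrapped):

```shell
#!/usr/bin/env bash
# Sketch of the configured-base-bdev name extraction (bdev_raid.sh@188).
# JSON is a reduced copy of the raid_bdev_info recorded in this log.
base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                         | select(.is_configured == true).name' <<'EOF'
{
  "name": "raid_bdev1",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        { "name": "pt1", "is_configured": true },
        { "name": "pt2", "is_configured": true },
        { "name": "pt3", "is_configured": true }
      ]
    }
  }
}
EOF
)
echo "$base_bdev_names"
```

This prints `pt1`, `pt2`, `pt3` one per line, which the script then iterates with `for name in $base_bdev_names`.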
00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.934 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:41.193 [2024-11-28 02:24:14.624299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- 
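The repeated `cmp_base_bdev='512   '` / `[[ 512 == \5\1\2\ \ \ ]]` checks above compare the raid bdev's metadata format against each base bdev. The trailing spaces are deliberate: `jq`'s `join(" ")` renders `null` fields as empty strings, so a bdev with no DIF metadata yields `"512"` followed by three spaces. A sketch under that assumption (the sample JSON is hypothetical; on a plain malloc/passthru bdev these fields are absent or null, as the log's empty values indicate):

```shell
#!/usr/bin/env bash
# Sketch of the metadata-format comparison string (bdev_raid.sh@189/@192).
# null/absent fields join as empty strings, producing "512" + three spaces.
cmp_base_bdev=$(jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<'EOF'
{ "block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null }
EOF
)
# bdev_raid.sh@193 pattern-matches against an escaped literal with the spaces:
[[ $cmp_base_bdev == "512   " ]] && echo "metadata formats match"
```

This is why the test's pattern is written `\5\1\2\ \ \ ` rather than just `512`: a base bdev with, say, a non-null `md_size` would produce a different join and fail the match.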
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e6c41653-aa61-4d9c-b71b-7c28ab76fe18 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e6c41653-aa61-4d9c-b71b-7c28ab76fe18 ']' 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.193 [2024-11-28 02:24:14.671905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.193 [2024-11-28 02:24:14.671943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.193 [2024-11-28 02:24:14.672019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.193 [2024-11-28 02:24:14.672082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.193 [2024-11-28 02:24:14.672091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.193 02:24:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.193 [2024-11-28 02:24:14.827721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:41.193 [2024-11-28 02:24:14.829574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:08:41.193 [2024-11-28 02:24:14.829632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:41.193 [2024-11-28 02:24:14.829683] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:41.193 [2024-11-28 02:24:14.829732] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:41.193 [2024-11-28 02:24:14.829750] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:41.193 [2024-11-28 02:24:14.829766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.193 [2024-11-28 02:24:14.829776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:41.193 request: 00:08:41.193 { 00:08:41.193 "name": "raid_bdev1", 00:08:41.193 "raid_level": "concat", 00:08:41.193 "base_bdevs": [ 00:08:41.193 "malloc1", 00:08:41.193 "malloc2", 00:08:41.193 "malloc3" 00:08:41.193 ], 00:08:41.193 "strip_size_kb": 64, 00:08:41.193 "superblock": false, 00:08:41.193 "method": "bdev_raid_create", 00:08:41.193 "req_id": 1 00:08:41.193 } 00:08:41.193 Got JSON-RPC error response 00:08:41.193 response: 00:08:41.193 { 00:08:41.193 "code": -17, 00:08:41.193 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:41.193 } 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
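The `NOT rpc_cmd bdev_raid_create ...` step above is a negative test: re-creating `raid_bdev1` from the malloc bdevs must fail because each one still carries a superblock from the deleted raid bdev, and the RPC returns the `-17` (`File exists`) error shown. A sketch of checking that error body locally, using the response JSON copied verbatim from the log:

```shell
#!/usr/bin/env bash
# Sketch of inspecting the JSON-RPC error recorded for the rejected
# bdev_raid_create call (error object copied from this log).
response='{ "code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists" }'
code=$(jq -r '.code' <<<"$response")
# The harness only needs the non-zero exit status (es=1 in the trace);
# the code itself distinguishes "already exists" from other failures.
[ "$code" -eq -17 ] && echo "create correctly rejected: File exists"
```

In the actual run the harness does not parse the body; `rpc_cmd`'s non-zero exit status is what `NOT` asserts, which is why the trace ends with `es=1`.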
00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.193 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.194 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.194 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.452 [2024-11-28 02:24:14.883535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.452 [2024-11-28 02:24:14.883577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.452 [2024-11-28 02:24:14.883595] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:41.452 [2024-11-28 02:24:14.883604] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.452 [2024-11-28 02:24:14.885706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.452 [2024-11-28 02:24:14.885739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.452 [2024-11-28 02:24:14.885817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:41.452 [2024-11-28 02:24:14.885871] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.452 pt1 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.452 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.453 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.453 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.453 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.453 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.453 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.453 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.453 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.453 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.453 "name": "raid_bdev1", 
00:08:41.453 "uuid": "e6c41653-aa61-4d9c-b71b-7c28ab76fe18", 00:08:41.453 "strip_size_kb": 64, 00:08:41.453 "state": "configuring", 00:08:41.453 "raid_level": "concat", 00:08:41.453 "superblock": true, 00:08:41.453 "num_base_bdevs": 3, 00:08:41.453 "num_base_bdevs_discovered": 1, 00:08:41.453 "num_base_bdevs_operational": 3, 00:08:41.453 "base_bdevs_list": [ 00:08:41.453 { 00:08:41.453 "name": "pt1", 00:08:41.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.453 "is_configured": true, 00:08:41.453 "data_offset": 2048, 00:08:41.453 "data_size": 63488 00:08:41.453 }, 00:08:41.453 { 00:08:41.453 "name": null, 00:08:41.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.453 "is_configured": false, 00:08:41.453 "data_offset": 2048, 00:08:41.453 "data_size": 63488 00:08:41.453 }, 00:08:41.453 { 00:08:41.453 "name": null, 00:08:41.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.453 "is_configured": false, 00:08:41.453 "data_offset": 2048, 00:08:41.453 "data_size": 63488 00:08:41.453 } 00:08:41.453 ] 00:08:41.453 }' 00:08:41.453 02:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.453 02:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.711 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:41.711 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.711 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.711 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.711 [2024-11-28 02:24:15.298981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.711 [2024-11-28 02:24:15.299083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.711 [2024-11-28 02:24:15.299119] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:41.711 [2024-11-28 02:24:15.299130] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.711 [2024-11-28 02:24:15.299710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.711 [2024-11-28 02:24:15.299729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.711 [2024-11-28 02:24:15.299841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:41.711 [2024-11-28 02:24:15.299873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.711 pt2 00:08:41.711 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.711 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:41.711 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.711 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.711 [2024-11-28 02:24:15.310903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:41.711 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.712 "name": "raid_bdev1", 00:08:41.712 "uuid": "e6c41653-aa61-4d9c-b71b-7c28ab76fe18", 00:08:41.712 "strip_size_kb": 64, 00:08:41.712 "state": "configuring", 00:08:41.712 "raid_level": "concat", 00:08:41.712 "superblock": true, 00:08:41.712 "num_base_bdevs": 3, 00:08:41.712 "num_base_bdevs_discovered": 1, 00:08:41.712 "num_base_bdevs_operational": 3, 00:08:41.712 "base_bdevs_list": [ 00:08:41.712 { 00:08:41.712 "name": "pt1", 00:08:41.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.712 "is_configured": true, 00:08:41.712 "data_offset": 2048, 00:08:41.712 "data_size": 63488 00:08:41.712 }, 00:08:41.712 { 00:08:41.712 "name": null, 00:08:41.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.712 "is_configured": false, 00:08:41.712 "data_offset": 0, 00:08:41.712 "data_size": 63488 00:08:41.712 }, 00:08:41.712 { 00:08:41.712 "name": null, 00:08:41.712 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.712 "is_configured": false, 00:08:41.712 "data_offset": 2048, 00:08:41.712 "data_size": 63488 00:08:41.712 } 00:08:41.712 ] 00:08:41.712 }' 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.712 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.279 [2024-11-28 02:24:15.754143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.279 [2024-11-28 02:24:15.754235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.279 [2024-11-28 02:24:15.754255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:42.279 [2024-11-28 02:24:15.754267] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.279 [2024-11-28 02:24:15.754811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.279 [2024-11-28 02:24:15.754834] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.279 [2024-11-28 02:24:15.754945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:42.279 [2024-11-28 02:24:15.754974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.279 pt2 00:08:42.279 02:24:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.279 [2024-11-28 02:24:15.766071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:42.279 [2024-11-28 02:24:15.766122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.279 [2024-11-28 02:24:15.766137] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:42.279 [2024-11-28 02:24:15.766147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.279 [2024-11-28 02:24:15.766552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.279 [2024-11-28 02:24:15.766575] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:42.279 [2024-11-28 02:24:15.766637] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:42.279 [2024-11-28 02:24:15.766660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:42.279 [2024-11-28 02:24:15.766803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:42.279 [2024-11-28 02:24:15.766815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:42.279 [2024-11-28 02:24:15.767093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:08:42.279 [2024-11-28 02:24:15.767264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:42.279 [2024-11-28 02:24:15.767279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:42.279 [2024-11-28 02:24:15.767443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.279 pt3 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.279 02:24:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.279 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.279 "name": "raid_bdev1", 00:08:42.279 "uuid": "e6c41653-aa61-4d9c-b71b-7c28ab76fe18", 00:08:42.279 "strip_size_kb": 64, 00:08:42.279 "state": "online", 00:08:42.279 "raid_level": "concat", 00:08:42.279 "superblock": true, 00:08:42.279 "num_base_bdevs": 3, 00:08:42.279 "num_base_bdevs_discovered": 3, 00:08:42.279 "num_base_bdevs_operational": 3, 00:08:42.279 "base_bdevs_list": [ 00:08:42.279 { 00:08:42.279 "name": "pt1", 00:08:42.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.279 "is_configured": true, 00:08:42.279 "data_offset": 2048, 00:08:42.279 "data_size": 63488 00:08:42.279 }, 00:08:42.279 { 00:08:42.279 "name": "pt2", 00:08:42.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.279 "is_configured": true, 00:08:42.279 "data_offset": 2048, 00:08:42.279 "data_size": 63488 00:08:42.279 }, 00:08:42.279 { 00:08:42.279 "name": "pt3", 00:08:42.279 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.279 "is_configured": true, 00:08:42.279 "data_offset": 2048, 00:08:42.279 "data_size": 63488 00:08:42.279 } 00:08:42.279 ] 00:08:42.280 }' 00:08:42.280 02:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.280 02:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.848 [2024-11-28 02:24:16.229704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.848 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.848 "name": "raid_bdev1", 00:08:42.848 "aliases": [ 00:08:42.848 "e6c41653-aa61-4d9c-b71b-7c28ab76fe18" 00:08:42.848 ], 00:08:42.849 "product_name": "Raid Volume", 00:08:42.849 "block_size": 512, 00:08:42.849 "num_blocks": 190464, 00:08:42.849 "uuid": "e6c41653-aa61-4d9c-b71b-7c28ab76fe18", 00:08:42.849 "assigned_rate_limits": { 00:08:42.849 "rw_ios_per_sec": 0, 00:08:42.849 "rw_mbytes_per_sec": 0, 00:08:42.849 "r_mbytes_per_sec": 0, 00:08:42.849 "w_mbytes_per_sec": 0 00:08:42.849 }, 00:08:42.849 "claimed": false, 00:08:42.849 "zoned": false, 00:08:42.849 "supported_io_types": { 00:08:42.849 "read": true, 00:08:42.849 "write": true, 00:08:42.849 "unmap": true, 00:08:42.849 "flush": true, 00:08:42.849 "reset": true, 00:08:42.849 "nvme_admin": false, 00:08:42.849 "nvme_io": false, 
00:08:42.849 "nvme_io_md": false, 00:08:42.849 "write_zeroes": true, 00:08:42.849 "zcopy": false, 00:08:42.849 "get_zone_info": false, 00:08:42.849 "zone_management": false, 00:08:42.849 "zone_append": false, 00:08:42.849 "compare": false, 00:08:42.849 "compare_and_write": false, 00:08:42.849 "abort": false, 00:08:42.849 "seek_hole": false, 00:08:42.849 "seek_data": false, 00:08:42.849 "copy": false, 00:08:42.849 "nvme_iov_md": false 00:08:42.849 }, 00:08:42.849 "memory_domains": [ 00:08:42.849 { 00:08:42.849 "dma_device_id": "system", 00:08:42.849 "dma_device_type": 1 00:08:42.849 }, 00:08:42.849 { 00:08:42.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.849 "dma_device_type": 2 00:08:42.849 }, 00:08:42.849 { 00:08:42.849 "dma_device_id": "system", 00:08:42.849 "dma_device_type": 1 00:08:42.849 }, 00:08:42.849 { 00:08:42.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.849 "dma_device_type": 2 00:08:42.849 }, 00:08:42.849 { 00:08:42.849 "dma_device_id": "system", 00:08:42.849 "dma_device_type": 1 00:08:42.849 }, 00:08:42.849 { 00:08:42.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.849 "dma_device_type": 2 00:08:42.849 } 00:08:42.849 ], 00:08:42.849 "driver_specific": { 00:08:42.849 "raid": { 00:08:42.849 "uuid": "e6c41653-aa61-4d9c-b71b-7c28ab76fe18", 00:08:42.849 "strip_size_kb": 64, 00:08:42.849 "state": "online", 00:08:42.849 "raid_level": "concat", 00:08:42.849 "superblock": true, 00:08:42.849 "num_base_bdevs": 3, 00:08:42.849 "num_base_bdevs_discovered": 3, 00:08:42.849 "num_base_bdevs_operational": 3, 00:08:42.849 "base_bdevs_list": [ 00:08:42.849 { 00:08:42.849 "name": "pt1", 00:08:42.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.849 "is_configured": true, 00:08:42.849 "data_offset": 2048, 00:08:42.849 "data_size": 63488 00:08:42.849 }, 00:08:42.849 { 00:08:42.849 "name": "pt2", 00:08:42.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.849 "is_configured": true, 00:08:42.849 "data_offset": 2048, 00:08:42.849 
"data_size": 63488 00:08:42.849 }, 00:08:42.849 { 00:08:42.849 "name": "pt3", 00:08:42.849 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.849 "is_configured": true, 00:08:42.849 "data_offset": 2048, 00:08:42.849 "data_size": 63488 00:08:42.849 } 00:08:42.849 ] 00:08:42.849 } 00:08:42.849 } 00:08:42.849 }' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:42.849 pt2 00:08:42.849 pt3' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.849 [2024-11-28 02:24:16.497274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e6c41653-aa61-4d9c-b71b-7c28ab76fe18 '!=' e6c41653-aa61-4d9c-b71b-7c28ab76fe18 ']' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66656 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66656 ']' 00:08:42.849 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66656 00:08:43.108 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:43.108 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.108 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66656 00:08:43.108 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.108 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.108 killing process with pid 66656 00:08:43.108 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66656' 00:08:43.108 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66656 00:08:43.108 [2024-11-28 02:24:16.568104] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:43.108 [2024-11-28 02:24:16.568230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.108 02:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66656 00:08:43.108 [2024-11-28 02:24:16.568310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.108 [2024-11-28 02:24:16.568325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:43.368 [2024-11-28 02:24:16.890555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.750 02:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:44.750 00:08:44.750 real 0m5.358s 00:08:44.750 user 0m7.634s 00:08:44.750 sys 0m0.852s 00:08:44.750 02:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.750 02:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.750 ************************************ 00:08:44.750 END TEST raid_superblock_test 00:08:44.750 ************************************ 00:08:44.750 02:24:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:44.750 02:24:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:44.750 02:24:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.750 02:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.750 ************************************ 00:08:44.750 START TEST raid_read_error_test 00:08:44.750 ************************************ 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:44.750 02:24:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:44.750 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:44.751 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yMHLmbhJAA 00:08:44.751 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66916 00:08:44.751 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:44.751 02:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66916 00:08:44.751 02:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66916 ']' 00:08:44.751 02:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.751 02:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.751 02:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.751 02:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.751 02:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.751 [2024-11-28 02:24:18.297694] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:44.751 [2024-11-28 02:24:18.297826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66916 ] 00:08:45.011 [2024-11-28 02:24:18.475089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.011 [2024-11-28 02:24:18.615607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.270 [2024-11-28 02:24:18.853210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.270 [2024-11-28 02:24:18.853259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.530 BaseBdev1_malloc 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.530 true 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.530 [2024-11-28 02:24:19.186164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:45.530 [2024-11-28 02:24:19.186229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.530 [2024-11-28 02:24:19.186249] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:45.530 [2024-11-28 02:24:19.186261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.530 [2024-11-28 02:24:19.188663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.530 [2024-11-28 02:24:19.188710] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:45.530 BaseBdev1 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.530 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 BaseBdev2_malloc 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 true 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 [2024-11-28 02:24:19.259449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:45.797 [2024-11-28 02:24:19.259515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.797 [2024-11-28 02:24:19.259532] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:45.797 [2024-11-28 02:24:19.259544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.797 [2024-11-28 02:24:19.261943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.797 [2024-11-28 02:24:19.261976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:45.797 BaseBdev2 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 BaseBdev3_malloc 00:08:45.797 02:24:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 true 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 [2024-11-28 02:24:19.361907] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:45.797 [2024-11-28 02:24:19.361963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.797 [2024-11-28 02:24:19.361979] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:45.797 [2024-11-28 02:24:19.361989] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.797 [2024-11-28 02:24:19.364027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.797 [2024-11-28 02:24:19.364060] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:45.797 BaseBdev3 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 [2024-11-28 02:24:19.373971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.797 [2024-11-28 02:24:19.375904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.797 [2024-11-28 02:24:19.376026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.797 [2024-11-28 02:24:19.376279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:45.797 [2024-11-28 02:24:19.376307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:45.797 [2024-11-28 02:24:19.376604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:45.797 [2024-11-28 02:24:19.376811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:45.797 [2024-11-28 02:24:19.376841] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:45.797 [2024-11-28 02:24:19.377033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.797 02:24:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.797 "name": "raid_bdev1", 00:08:45.797 "uuid": "aff98556-1e27-4eed-9638-9999adf0fbd0", 00:08:45.797 "strip_size_kb": 64, 00:08:45.797 "state": "online", 00:08:45.797 "raid_level": "concat", 00:08:45.797 "superblock": true, 00:08:45.797 "num_base_bdevs": 3, 00:08:45.797 "num_base_bdevs_discovered": 3, 00:08:45.797 "num_base_bdevs_operational": 3, 00:08:45.797 "base_bdevs_list": [ 00:08:45.797 { 00:08:45.797 "name": "BaseBdev1", 00:08:45.797 "uuid": "d07c59f0-6d0c-503d-a157-b3dd066ecff8", 00:08:45.797 "is_configured": true, 00:08:45.797 "data_offset": 2048, 00:08:45.797 "data_size": 63488 00:08:45.797 }, 00:08:45.797 { 00:08:45.797 "name": "BaseBdev2", 00:08:45.797 "uuid": "fc5b222a-a922-5972-a48d-7851f629928a", 00:08:45.797 "is_configured": true, 00:08:45.797 "data_offset": 2048, 00:08:45.797 "data_size": 63488 
00:08:45.797 }, 00:08:45.797 { 00:08:45.797 "name": "BaseBdev3", 00:08:45.797 "uuid": "0ae2a84a-33dc-54cf-ac22-499a7bc797ab", 00:08:45.797 "is_configured": true, 00:08:45.797 "data_offset": 2048, 00:08:45.797 "data_size": 63488 00:08:45.797 } 00:08:45.797 ] 00:08:45.797 }' 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.797 02:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.379 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:46.379 02:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:46.379 [2024-11-28 02:24:19.902425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.318 "name": "raid_bdev1", 00:08:47.318 "uuid": "aff98556-1e27-4eed-9638-9999adf0fbd0", 00:08:47.318 "strip_size_kb": 64, 00:08:47.318 "state": "online", 00:08:47.318 "raid_level": "concat", 00:08:47.318 "superblock": true, 00:08:47.318 "num_base_bdevs": 3, 00:08:47.318 "num_base_bdevs_discovered": 3, 00:08:47.318 "num_base_bdevs_operational": 3, 00:08:47.318 "base_bdevs_list": [ 00:08:47.318 { 00:08:47.318 "name": "BaseBdev1", 00:08:47.318 "uuid": "d07c59f0-6d0c-503d-a157-b3dd066ecff8", 00:08:47.318 "is_configured": true, 00:08:47.318 "data_offset": 2048, 00:08:47.318 "data_size": 63488 
00:08:47.318 }, 00:08:47.318 { 00:08:47.318 "name": "BaseBdev2", 00:08:47.318 "uuid": "fc5b222a-a922-5972-a48d-7851f629928a", 00:08:47.318 "is_configured": true, 00:08:47.318 "data_offset": 2048, 00:08:47.318 "data_size": 63488 00:08:47.318 }, 00:08:47.318 { 00:08:47.318 "name": "BaseBdev3", 00:08:47.318 "uuid": "0ae2a84a-33dc-54cf-ac22-499a7bc797ab", 00:08:47.318 "is_configured": true, 00:08:47.318 "data_offset": 2048, 00:08:47.318 "data_size": 63488 00:08:47.318 } 00:08:47.318 ] 00:08:47.318 }' 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.318 02:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.578 02:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:47.578 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.578 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.578 [2024-11-28 02:24:21.252623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.578 [2024-11-28 02:24:21.252658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.578 [2024-11-28 02:24:21.255354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.578 [2024-11-28 02:24:21.255401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.578 [2024-11-28 02:24:21.255446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.578 [2024-11-28 02:24:21.255459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:47.838 { 00:08:47.838 "results": [ 00:08:47.838 { 00:08:47.838 "job": "raid_bdev1", 00:08:47.838 "core_mask": "0x1", 00:08:47.838 "workload": "randrw", 00:08:47.838 "percentage": 50, 
00:08:47.838 "status": "finished", 00:08:47.838 "queue_depth": 1, 00:08:47.838 "io_size": 131072, 00:08:47.838 "runtime": 1.351095, 00:08:47.838 "iops": 15997.394705775685, 00:08:47.838 "mibps": 1999.6743382219606, 00:08:47.838 "io_failed": 1, 00:08:47.838 "io_timeout": 0, 00:08:47.838 "avg_latency_us": 86.51341371985126, 00:08:47.838 "min_latency_us": 24.929257641921396, 00:08:47.838 "max_latency_us": 1380.8349344978167 00:08:47.838 } 00:08:47.838 ], 00:08:47.838 "core_count": 1 00:08:47.838 } 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66916 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66916 ']' 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66916 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66916 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.838 killing process with pid 66916 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66916' 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66916 00:08:47.838 [2024-11-28 02:24:21.302222] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.838 02:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66916 00:08:48.099 [2024-11-28 
02:24:21.554633] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.480 02:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yMHLmbhJAA 00:08:49.480 02:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:49.480 02:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:49.480 02:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:49.480 02:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:49.480 02:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.480 02:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.480 02:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:49.480 00:08:49.480 real 0m4.615s 00:08:49.480 user 0m5.353s 00:08:49.480 sys 0m0.630s 00:08:49.480 02:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.480 02:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.480 ************************************ 00:08:49.480 END TEST raid_read_error_test 00:08:49.480 ************************************ 00:08:49.480 02:24:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:49.480 02:24:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:49.480 02:24:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.480 02:24:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.480 ************************************ 00:08:49.480 START TEST raid_write_error_test 00:08:49.480 ************************************ 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:08:49.480 02:24:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:49.480 02:24:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7TPmKAqY2P 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67056 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67056 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67056 ']' 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.480 02:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.480 [2024-11-28 02:24:22.981088] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:49.480 [2024-11-28 02:24:22.981526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67056 ] 00:08:49.480 [2024-11-28 02:24:23.156626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.740 [2024-11-28 02:24:23.271166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.999 [2024-11-28 02:24:23.471823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.999 [2024-11-28 02:24:23.471878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.259 BaseBdev1_malloc 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.259 true 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.259 [2024-11-28 02:24:23.870254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:50.259 [2024-11-28 02:24:23.870326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.259 [2024-11-28 02:24:23.870346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:50.259 [2024-11-28 02:24:23.870357] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.259 [2024-11-28 02:24:23.872427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.259 [2024-11-28 02:24:23.872473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:50.259 BaseBdev1 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.259 BaseBdev2_malloc 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.259 true 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.259 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.519 [2024-11-28 02:24:23.937469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:50.520 [2024-11-28 02:24:23.937526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.520 [2024-11-28 02:24:23.937542] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:50.520 [2024-11-28 02:24:23.937552] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.520 [2024-11-28 02:24:23.939645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.520 [2024-11-28 02:24:23.939699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:50.520 BaseBdev2 00:08:50.520 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.520 02:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.520 02:24:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:50.520 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.520 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.520 BaseBdev3_malloc 00:08:50.520 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.520 02:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:50.520 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.520 02:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.520 true 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.520 [2024-11-28 02:24:24.014606] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:50.520 [2024-11-28 02:24:24.014663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.520 [2024-11-28 02:24:24.014681] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:50.520 [2024-11-28 02:24:24.014691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.520 [2024-11-28 02:24:24.016737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.520 [2024-11-28 02:24:24.016780] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:50.520 BaseBdev3 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.520 [2024-11-28 02:24:24.026662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.520 [2024-11-28 02:24:24.028465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.520 [2024-11-28 02:24:24.028547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.520 [2024-11-28 02:24:24.028752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:50.520 [2024-11-28 02:24:24.028773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:50.520 [2024-11-28 02:24:24.029067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:50.520 [2024-11-28 02:24:24.029260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:50.520 [2024-11-28 02:24:24.029284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:50.520 [2024-11-28 02:24:24.029446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.520 "name": "raid_bdev1", 00:08:50.520 "uuid": "36b56c41-2dce-42ba-aaf7-9e78b057f5b4", 00:08:50.520 "strip_size_kb": 64, 00:08:50.520 "state": "online", 00:08:50.520 "raid_level": "concat", 00:08:50.520 "superblock": true, 00:08:50.520 "num_base_bdevs": 3, 00:08:50.520 "num_base_bdevs_discovered": 3, 00:08:50.520 "num_base_bdevs_operational": 3, 00:08:50.520 "base_bdevs_list": [ 00:08:50.520 { 00:08:50.520 
"name": "BaseBdev1", 00:08:50.520 "uuid": "0c766a73-e28e-59da-afb2-21a2d54e9514", 00:08:50.520 "is_configured": true, 00:08:50.520 "data_offset": 2048, 00:08:50.520 "data_size": 63488 00:08:50.520 }, 00:08:50.520 { 00:08:50.520 "name": "BaseBdev2", 00:08:50.520 "uuid": "7bc8a458-c26d-5623-9795-2bc0e01a898f", 00:08:50.520 "is_configured": true, 00:08:50.520 "data_offset": 2048, 00:08:50.520 "data_size": 63488 00:08:50.520 }, 00:08:50.520 { 00:08:50.520 "name": "BaseBdev3", 00:08:50.520 "uuid": "f8fd7a69-0083-54d6-8293-1bac353404c6", 00:08:50.520 "is_configured": true, 00:08:50.520 "data_offset": 2048, 00:08:50.520 "data_size": 63488 00:08:50.520 } 00:08:50.520 ] 00:08:50.520 }' 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.520 02:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.780 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:50.780 02:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:51.039 [2024-11-28 02:24:24.535061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.975 "name": "raid_bdev1", 00:08:51.975 "uuid": "36b56c41-2dce-42ba-aaf7-9e78b057f5b4", 00:08:51.975 "strip_size_kb": 64, 00:08:51.975 "state": "online", 
00:08:51.975 "raid_level": "concat", 00:08:51.975 "superblock": true, 00:08:51.975 "num_base_bdevs": 3, 00:08:51.975 "num_base_bdevs_discovered": 3, 00:08:51.975 "num_base_bdevs_operational": 3, 00:08:51.975 "base_bdevs_list": [ 00:08:51.975 { 00:08:51.975 "name": "BaseBdev1", 00:08:51.975 "uuid": "0c766a73-e28e-59da-afb2-21a2d54e9514", 00:08:51.975 "is_configured": true, 00:08:51.975 "data_offset": 2048, 00:08:51.975 "data_size": 63488 00:08:51.975 }, 00:08:51.975 { 00:08:51.975 "name": "BaseBdev2", 00:08:51.975 "uuid": "7bc8a458-c26d-5623-9795-2bc0e01a898f", 00:08:51.975 "is_configured": true, 00:08:51.975 "data_offset": 2048, 00:08:51.975 "data_size": 63488 00:08:51.975 }, 00:08:51.975 { 00:08:51.975 "name": "BaseBdev3", 00:08:51.975 "uuid": "f8fd7a69-0083-54d6-8293-1bac353404c6", 00:08:51.975 "is_configured": true, 00:08:51.975 "data_offset": 2048, 00:08:51.975 "data_size": 63488 00:08:51.975 } 00:08:51.975 ] 00:08:51.975 }' 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.975 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.544 [2024-11-28 02:24:25.931292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.544 [2024-11-28 02:24:25.931329] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.544 [2024-11-28 02:24:25.934060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.544 [2024-11-28 02:24:25.934108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.544 [2024-11-28 02:24:25.934147] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.544 [2024-11-28 02:24:25.934156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:52.544 { 00:08:52.544 "results": [ 00:08:52.544 { 00:08:52.544 "job": "raid_bdev1", 00:08:52.544 "core_mask": "0x1", 00:08:52.544 "workload": "randrw", 00:08:52.544 "percentage": 50, 00:08:52.544 "status": "finished", 00:08:52.544 "queue_depth": 1, 00:08:52.544 "io_size": 131072, 00:08:52.544 "runtime": 1.39725, 00:08:52.544 "iops": 15708.713544462336, 00:08:52.544 "mibps": 1963.589193057792, 00:08:52.544 "io_failed": 1, 00:08:52.544 "io_timeout": 0, 00:08:52.544 "avg_latency_us": 88.13051146412549, 00:08:52.544 "min_latency_us": 26.1589519650655, 00:08:52.544 "max_latency_us": 1373.6803493449781 00:08:52.544 } 00:08:52.544 ], 00:08:52.544 "core_count": 1 00:08:52.544 } 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67056 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67056 ']' 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67056 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67056 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.544 killing process with pid 67056 00:08:52.544 02:24:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67056' 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67056 00:08:52.544 [2024-11-28 02:24:25.976595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.544 02:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67056 00:08:52.544 [2024-11-28 02:24:26.201473] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.945 02:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7TPmKAqY2P 00:08:53.945 02:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:53.945 02:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:53.945 02:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:53.945 02:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:53.945 02:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.945 02:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:53.945 02:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:53.945 00:08:53.945 real 0m4.488s 00:08:53.945 user 0m5.304s 00:08:53.945 sys 0m0.586s 00:08:53.945 02:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.945 02:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.945 ************************************ 00:08:53.945 END TEST raid_write_error_test 00:08:53.945 ************************************ 00:08:53.945 02:24:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:53.945 02:24:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:53.945 02:24:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:53.945 02:24:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.945 02:24:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.945 ************************************ 00:08:53.945 START TEST raid_state_function_test 00:08:53.945 ************************************ 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67204 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67204' 00:08:53.945 Process raid pid: 67204 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67204 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67204 ']' 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.945 02:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.945 [2024-11-28 02:24:27.538232] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:53.945 [2024-11-28 02:24:27.538345] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.204 [2024-11-28 02:24:27.693205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.204 [2024-11-28 02:24:27.802673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.463 [2024-11-28 02:24:28.002105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.463 [2024-11-28 02:24:28.002154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.723 [2024-11-28 02:24:28.368121] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.723 [2024-11-28 02:24:28.368173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.723 [2024-11-28 02:24:28.368183] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.723 [2024-11-28 02:24:28.368194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.723 [2024-11-28 02:24:28.368200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.723 [2024-11-28 02:24:28.368209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.723 
02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.723 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.983 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.983 "name": "Existed_Raid", 00:08:54.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.983 "strip_size_kb": 0, 00:08:54.983 "state": "configuring", 00:08:54.983 "raid_level": "raid1", 00:08:54.983 "superblock": false, 00:08:54.983 "num_base_bdevs": 3, 00:08:54.983 "num_base_bdevs_discovered": 0, 00:08:54.983 "num_base_bdevs_operational": 3, 00:08:54.983 "base_bdevs_list": [ 00:08:54.983 { 00:08:54.983 "name": "BaseBdev1", 00:08:54.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.983 "is_configured": false, 00:08:54.983 "data_offset": 0, 00:08:54.983 "data_size": 0 00:08:54.983 }, 00:08:54.983 { 00:08:54.983 "name": "BaseBdev2", 00:08:54.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.983 "is_configured": false, 00:08:54.983 "data_offset": 0, 00:08:54.983 "data_size": 0 00:08:54.983 }, 00:08:54.983 { 00:08:54.983 "name": "BaseBdev3", 00:08:54.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.983 "is_configured": false, 00:08:54.983 "data_offset": 0, 00:08:54.983 "data_size": 0 00:08:54.983 } 00:08:54.983 ] 00:08:54.983 }' 00:08:54.983 02:24:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.983 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.243 [2024-11-28 02:24:28.815331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.243 [2024-11-28 02:24:28.815371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.243 [2024-11-28 02:24:28.827326] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.243 [2024-11-28 02:24:28.827369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.243 [2024-11-28 02:24:28.827379] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.243 [2024-11-28 02:24:28.827388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.243 [2024-11-28 02:24:28.827394] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.243 [2024-11-28 02:24:28.827405] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.243 [2024-11-28 02:24:28.875133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.243 BaseBdev1 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.243 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.244 [ 00:08:55.244 { 00:08:55.244 "name": "BaseBdev1", 00:08:55.244 "aliases": [ 00:08:55.244 "2e519aca-c100-4bce-b01c-29b7fa7885d0" 00:08:55.244 ], 00:08:55.244 "product_name": "Malloc disk", 00:08:55.244 "block_size": 512, 00:08:55.244 "num_blocks": 65536, 00:08:55.244 "uuid": "2e519aca-c100-4bce-b01c-29b7fa7885d0", 00:08:55.244 "assigned_rate_limits": { 00:08:55.244 "rw_ios_per_sec": 0, 00:08:55.244 "rw_mbytes_per_sec": 0, 00:08:55.244 "r_mbytes_per_sec": 0, 00:08:55.244 "w_mbytes_per_sec": 0 00:08:55.244 }, 00:08:55.244 "claimed": true, 00:08:55.244 "claim_type": "exclusive_write", 00:08:55.244 "zoned": false, 00:08:55.244 "supported_io_types": { 00:08:55.244 "read": true, 00:08:55.244 "write": true, 00:08:55.244 "unmap": true, 00:08:55.244 "flush": true, 00:08:55.244 "reset": true, 00:08:55.244 "nvme_admin": false, 00:08:55.244 "nvme_io": false, 00:08:55.244 "nvme_io_md": false, 00:08:55.244 "write_zeroes": true, 00:08:55.244 "zcopy": true, 00:08:55.244 "get_zone_info": false, 00:08:55.244 "zone_management": false, 00:08:55.244 "zone_append": false, 00:08:55.244 "compare": false, 00:08:55.244 "compare_and_write": false, 00:08:55.244 "abort": true, 00:08:55.244 "seek_hole": false, 00:08:55.244 "seek_data": false, 00:08:55.244 "copy": true, 00:08:55.244 "nvme_iov_md": false 00:08:55.244 }, 00:08:55.244 "memory_domains": [ 00:08:55.244 { 00:08:55.244 "dma_device_id": "system", 00:08:55.244 "dma_device_type": 1 00:08:55.244 }, 00:08:55.244 { 00:08:55.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.244 "dma_device_type": 2 00:08:55.244 } 00:08:55.244 ], 00:08:55.244 "driver_specific": {} 00:08:55.244 } 00:08:55.244 ] 00:08:55.244 02:24:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.244 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.521 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.521 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:55.521 "name": "Existed_Raid", 00:08:55.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.521 "strip_size_kb": 0, 00:08:55.521 "state": "configuring", 00:08:55.521 "raid_level": "raid1", 00:08:55.521 "superblock": false, 00:08:55.521 "num_base_bdevs": 3, 00:08:55.521 "num_base_bdevs_discovered": 1, 00:08:55.521 "num_base_bdevs_operational": 3, 00:08:55.521 "base_bdevs_list": [ 00:08:55.521 { 00:08:55.521 "name": "BaseBdev1", 00:08:55.521 "uuid": "2e519aca-c100-4bce-b01c-29b7fa7885d0", 00:08:55.521 "is_configured": true, 00:08:55.521 "data_offset": 0, 00:08:55.522 "data_size": 65536 00:08:55.522 }, 00:08:55.522 { 00:08:55.522 "name": "BaseBdev2", 00:08:55.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.522 "is_configured": false, 00:08:55.522 "data_offset": 0, 00:08:55.522 "data_size": 0 00:08:55.522 }, 00:08:55.522 { 00:08:55.522 "name": "BaseBdev3", 00:08:55.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.522 "is_configured": false, 00:08:55.522 "data_offset": 0, 00:08:55.522 "data_size": 0 00:08:55.522 } 00:08:55.522 ] 00:08:55.522 }' 00:08:55.522 02:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.522 02:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.795 [2024-11-28 02:24:29.358346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.795 [2024-11-28 02:24:29.358403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.795 [2024-11-28 02:24:29.370357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.795 [2024-11-28 02:24:29.372171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.795 [2024-11-28 02:24:29.372209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.795 [2024-11-28 02:24:29.372219] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.795 [2024-11-28 02:24:29.372228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.795 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.796 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.796 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.796 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.796 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.796 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.796 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.796 "name": "Existed_Raid", 00:08:55.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.796 "strip_size_kb": 0, 00:08:55.796 "state": "configuring", 00:08:55.796 "raid_level": "raid1", 00:08:55.796 "superblock": false, 00:08:55.796 "num_base_bdevs": 3, 00:08:55.796 "num_base_bdevs_discovered": 1, 00:08:55.796 "num_base_bdevs_operational": 3, 00:08:55.796 "base_bdevs_list": [ 00:08:55.796 { 00:08:55.796 "name": "BaseBdev1", 00:08:55.796 "uuid": "2e519aca-c100-4bce-b01c-29b7fa7885d0", 00:08:55.796 "is_configured": true, 00:08:55.796 "data_offset": 0, 00:08:55.796 "data_size": 65536 00:08:55.796 }, 00:08:55.796 { 00:08:55.796 "name": "BaseBdev2", 00:08:55.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.796 
"is_configured": false, 00:08:55.796 "data_offset": 0, 00:08:55.796 "data_size": 0 00:08:55.796 }, 00:08:55.796 { 00:08:55.796 "name": "BaseBdev3", 00:08:55.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.796 "is_configured": false, 00:08:55.796 "data_offset": 0, 00:08:55.796 "data_size": 0 00:08:55.796 } 00:08:55.796 ] 00:08:55.796 }' 00:08:55.796 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.796 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.364 [2024-11-28 02:24:29.847691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.364 BaseBdev2 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.364 02:24:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.364 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.364 [ 00:08:56.364 { 00:08:56.365 "name": "BaseBdev2", 00:08:56.365 "aliases": [ 00:08:56.365 "3deae22e-7920-490e-a85a-d8c06c839841" 00:08:56.365 ], 00:08:56.365 "product_name": "Malloc disk", 00:08:56.365 "block_size": 512, 00:08:56.365 "num_blocks": 65536, 00:08:56.365 "uuid": "3deae22e-7920-490e-a85a-d8c06c839841", 00:08:56.365 "assigned_rate_limits": { 00:08:56.365 "rw_ios_per_sec": 0, 00:08:56.365 "rw_mbytes_per_sec": 0, 00:08:56.365 "r_mbytes_per_sec": 0, 00:08:56.365 "w_mbytes_per_sec": 0 00:08:56.365 }, 00:08:56.365 "claimed": true, 00:08:56.365 "claim_type": "exclusive_write", 00:08:56.365 "zoned": false, 00:08:56.365 "supported_io_types": { 00:08:56.365 "read": true, 00:08:56.365 "write": true, 00:08:56.365 "unmap": true, 00:08:56.365 "flush": true, 00:08:56.365 "reset": true, 00:08:56.365 "nvme_admin": false, 00:08:56.365 "nvme_io": false, 00:08:56.365 "nvme_io_md": false, 00:08:56.365 "write_zeroes": true, 00:08:56.365 "zcopy": true, 00:08:56.365 "get_zone_info": false, 00:08:56.365 "zone_management": false, 00:08:56.365 "zone_append": false, 00:08:56.365 "compare": false, 00:08:56.365 "compare_and_write": false, 00:08:56.365 "abort": true, 00:08:56.365 "seek_hole": false, 00:08:56.365 "seek_data": false, 00:08:56.365 "copy": true, 00:08:56.365 "nvme_iov_md": false 00:08:56.365 }, 00:08:56.365 
"memory_domains": [ 00:08:56.365 { 00:08:56.365 "dma_device_id": "system", 00:08:56.365 "dma_device_type": 1 00:08:56.365 }, 00:08:56.365 { 00:08:56.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.365 "dma_device_type": 2 00:08:56.365 } 00:08:56.365 ], 00:08:56.365 "driver_specific": {} 00:08:56.365 } 00:08:56.365 ] 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.365 "name": "Existed_Raid", 00:08:56.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.365 "strip_size_kb": 0, 00:08:56.365 "state": "configuring", 00:08:56.365 "raid_level": "raid1", 00:08:56.365 "superblock": false, 00:08:56.365 "num_base_bdevs": 3, 00:08:56.365 "num_base_bdevs_discovered": 2, 00:08:56.365 "num_base_bdevs_operational": 3, 00:08:56.365 "base_bdevs_list": [ 00:08:56.365 { 00:08:56.365 "name": "BaseBdev1", 00:08:56.365 "uuid": "2e519aca-c100-4bce-b01c-29b7fa7885d0", 00:08:56.365 "is_configured": true, 00:08:56.365 "data_offset": 0, 00:08:56.365 "data_size": 65536 00:08:56.365 }, 00:08:56.365 { 00:08:56.365 "name": "BaseBdev2", 00:08:56.365 "uuid": "3deae22e-7920-490e-a85a-d8c06c839841", 00:08:56.365 "is_configured": true, 00:08:56.365 "data_offset": 0, 00:08:56.365 "data_size": 65536 00:08:56.365 }, 00:08:56.365 { 00:08:56.365 "name": "BaseBdev3", 00:08:56.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.365 "is_configured": false, 00:08:56.365 "data_offset": 0, 00:08:56.365 "data_size": 0 00:08:56.365 } 00:08:56.365 ] 00:08:56.365 }' 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.365 02:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.934 [2024-11-28 02:24:30.370595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.934 [2024-11-28 02:24:30.370660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:56.934 [2024-11-28 02:24:30.370675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:56.934 [2024-11-28 02:24:30.371187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:56.934 [2024-11-28 02:24:30.371388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:56.934 [2024-11-28 02:24:30.371403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:56.934 [2024-11-28 02:24:30.371710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.934 BaseBdev3 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.934 [ 00:08:56.934 { 00:08:56.934 "name": "BaseBdev3", 00:08:56.934 "aliases": [ 00:08:56.934 "d33da274-b99d-4256-8603-bfd1749f48b6" 00:08:56.934 ], 00:08:56.934 "product_name": "Malloc disk", 00:08:56.934 "block_size": 512, 00:08:56.934 "num_blocks": 65536, 00:08:56.934 "uuid": "d33da274-b99d-4256-8603-bfd1749f48b6", 00:08:56.934 "assigned_rate_limits": { 00:08:56.934 "rw_ios_per_sec": 0, 00:08:56.934 "rw_mbytes_per_sec": 0, 00:08:56.934 "r_mbytes_per_sec": 0, 00:08:56.934 "w_mbytes_per_sec": 0 00:08:56.934 }, 00:08:56.934 "claimed": true, 00:08:56.934 "claim_type": "exclusive_write", 00:08:56.934 "zoned": false, 00:08:56.934 "supported_io_types": { 00:08:56.934 "read": true, 00:08:56.934 "write": true, 00:08:56.934 "unmap": true, 00:08:56.934 "flush": true, 00:08:56.934 "reset": true, 00:08:56.934 "nvme_admin": false, 00:08:56.934 "nvme_io": false, 00:08:56.934 "nvme_io_md": false, 00:08:56.934 "write_zeroes": true, 00:08:56.934 "zcopy": true, 00:08:56.934 "get_zone_info": false, 00:08:56.934 "zone_management": false, 00:08:56.934 "zone_append": false, 00:08:56.934 "compare": false, 00:08:56.934 "compare_and_write": false, 00:08:56.934 "abort": true, 00:08:56.934 "seek_hole": false, 00:08:56.934 "seek_data": false, 00:08:56.934 
"copy": true, 00:08:56.934 "nvme_iov_md": false 00:08:56.934 }, 00:08:56.934 "memory_domains": [ 00:08:56.934 { 00:08:56.934 "dma_device_id": "system", 00:08:56.934 "dma_device_type": 1 00:08:56.934 }, 00:08:56.934 { 00:08:56.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.934 "dma_device_type": 2 00:08:56.934 } 00:08:56.934 ], 00:08:56.934 "driver_specific": {} 00:08:56.934 } 00:08:56.934 ] 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.934 02:24:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.934 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.934 "name": "Existed_Raid", 00:08:56.934 "uuid": "e9898fb6-7d08-4e7e-8a0a-08e9efd29a92", 00:08:56.934 "strip_size_kb": 0, 00:08:56.934 "state": "online", 00:08:56.934 "raid_level": "raid1", 00:08:56.934 "superblock": false, 00:08:56.934 "num_base_bdevs": 3, 00:08:56.934 "num_base_bdevs_discovered": 3, 00:08:56.934 "num_base_bdevs_operational": 3, 00:08:56.934 "base_bdevs_list": [ 00:08:56.934 { 00:08:56.934 "name": "BaseBdev1", 00:08:56.934 "uuid": "2e519aca-c100-4bce-b01c-29b7fa7885d0", 00:08:56.934 "is_configured": true, 00:08:56.934 "data_offset": 0, 00:08:56.934 "data_size": 65536 00:08:56.934 }, 00:08:56.934 { 00:08:56.935 "name": "BaseBdev2", 00:08:56.935 "uuid": "3deae22e-7920-490e-a85a-d8c06c839841", 00:08:56.935 "is_configured": true, 00:08:56.935 "data_offset": 0, 00:08:56.935 "data_size": 65536 00:08:56.935 }, 00:08:56.935 { 00:08:56.935 "name": "BaseBdev3", 00:08:56.935 "uuid": "d33da274-b99d-4256-8603-bfd1749f48b6", 00:08:56.935 "is_configured": true, 00:08:56.935 "data_offset": 0, 00:08:56.935 "data_size": 65536 00:08:56.935 } 00:08:56.935 ] 00:08:56.935 }' 00:08:56.935 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.935 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.194 02:24:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.194 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.194 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.194 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.194 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.194 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.194 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.194 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.194 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.194 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.194 [2024-11-28 02:24:30.842382] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.194 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.454 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.454 "name": "Existed_Raid", 00:08:57.454 "aliases": [ 00:08:57.454 "e9898fb6-7d08-4e7e-8a0a-08e9efd29a92" 00:08:57.454 ], 00:08:57.454 "product_name": "Raid Volume", 00:08:57.454 "block_size": 512, 00:08:57.454 "num_blocks": 65536, 00:08:57.454 "uuid": "e9898fb6-7d08-4e7e-8a0a-08e9efd29a92", 00:08:57.454 "assigned_rate_limits": { 00:08:57.454 "rw_ios_per_sec": 0, 00:08:57.454 "rw_mbytes_per_sec": 0, 00:08:57.454 "r_mbytes_per_sec": 0, 00:08:57.454 "w_mbytes_per_sec": 0 00:08:57.454 }, 00:08:57.454 "claimed": false, 00:08:57.454 "zoned": false, 
00:08:57.454 "supported_io_types": { 00:08:57.454 "read": true, 00:08:57.454 "write": true, 00:08:57.454 "unmap": false, 00:08:57.454 "flush": false, 00:08:57.454 "reset": true, 00:08:57.454 "nvme_admin": false, 00:08:57.454 "nvme_io": false, 00:08:57.454 "nvme_io_md": false, 00:08:57.454 "write_zeroes": true, 00:08:57.454 "zcopy": false, 00:08:57.454 "get_zone_info": false, 00:08:57.454 "zone_management": false, 00:08:57.454 "zone_append": false, 00:08:57.454 "compare": false, 00:08:57.454 "compare_and_write": false, 00:08:57.454 "abort": false, 00:08:57.454 "seek_hole": false, 00:08:57.454 "seek_data": false, 00:08:57.454 "copy": false, 00:08:57.454 "nvme_iov_md": false 00:08:57.454 }, 00:08:57.454 "memory_domains": [ 00:08:57.454 { 00:08:57.454 "dma_device_id": "system", 00:08:57.454 "dma_device_type": 1 00:08:57.454 }, 00:08:57.454 { 00:08:57.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.454 "dma_device_type": 2 00:08:57.454 }, 00:08:57.454 { 00:08:57.454 "dma_device_id": "system", 00:08:57.454 "dma_device_type": 1 00:08:57.454 }, 00:08:57.454 { 00:08:57.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.454 "dma_device_type": 2 00:08:57.454 }, 00:08:57.454 { 00:08:57.455 "dma_device_id": "system", 00:08:57.455 "dma_device_type": 1 00:08:57.455 }, 00:08:57.455 { 00:08:57.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.455 "dma_device_type": 2 00:08:57.455 } 00:08:57.455 ], 00:08:57.455 "driver_specific": { 00:08:57.455 "raid": { 00:08:57.455 "uuid": "e9898fb6-7d08-4e7e-8a0a-08e9efd29a92", 00:08:57.455 "strip_size_kb": 0, 00:08:57.455 "state": "online", 00:08:57.455 "raid_level": "raid1", 00:08:57.455 "superblock": false, 00:08:57.455 "num_base_bdevs": 3, 00:08:57.455 "num_base_bdevs_discovered": 3, 00:08:57.455 "num_base_bdevs_operational": 3, 00:08:57.455 "base_bdevs_list": [ 00:08:57.455 { 00:08:57.455 "name": "BaseBdev1", 00:08:57.455 "uuid": "2e519aca-c100-4bce-b01c-29b7fa7885d0", 00:08:57.455 "is_configured": true, 00:08:57.455 
"data_offset": 0, 00:08:57.455 "data_size": 65536 00:08:57.455 }, 00:08:57.455 { 00:08:57.455 "name": "BaseBdev2", 00:08:57.455 "uuid": "3deae22e-7920-490e-a85a-d8c06c839841", 00:08:57.455 "is_configured": true, 00:08:57.455 "data_offset": 0, 00:08:57.455 "data_size": 65536 00:08:57.455 }, 00:08:57.455 { 00:08:57.455 "name": "BaseBdev3", 00:08:57.455 "uuid": "d33da274-b99d-4256-8603-bfd1749f48b6", 00:08:57.455 "is_configured": true, 00:08:57.455 "data_offset": 0, 00:08:57.455 "data_size": 65536 00:08:57.455 } 00:08:57.455 ] 00:08:57.455 } 00:08:57.455 } 00:08:57.455 }' 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:57.455 BaseBdev2 00:08:57.455 BaseBdev3' 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.455 02:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.455 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.455 [2024-11-28 02:24:31.097688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.714 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.714 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.715 "name": "Existed_Raid", 00:08:57.715 "uuid": "e9898fb6-7d08-4e7e-8a0a-08e9efd29a92", 00:08:57.715 "strip_size_kb": 0, 00:08:57.715 "state": "online", 00:08:57.715 "raid_level": "raid1", 00:08:57.715 "superblock": false, 00:08:57.715 "num_base_bdevs": 3, 00:08:57.715 "num_base_bdevs_discovered": 2, 00:08:57.715 "num_base_bdevs_operational": 2, 00:08:57.715 "base_bdevs_list": [ 00:08:57.715 { 00:08:57.715 "name": null, 00:08:57.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.715 "is_configured": false, 00:08:57.715 "data_offset": 0, 00:08:57.715 "data_size": 65536 00:08:57.715 }, 00:08:57.715 { 00:08:57.715 "name": "BaseBdev2", 00:08:57.715 "uuid": "3deae22e-7920-490e-a85a-d8c06c839841", 00:08:57.715 "is_configured": true, 00:08:57.715 "data_offset": 0, 00:08:57.715 "data_size": 65536 00:08:57.715 }, 00:08:57.715 { 00:08:57.715 "name": "BaseBdev3", 00:08:57.715 "uuid": "d33da274-b99d-4256-8603-bfd1749f48b6", 00:08:57.715 "is_configured": true, 00:08:57.715 "data_offset": 0, 00:08:57.715 "data_size": 65536 00:08:57.715 } 00:08:57.715 ] 
00:08:57.715 }' 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.715 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.974 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:57.974 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.974 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.974 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.974 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.974 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.233 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.233 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.233 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.233 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:58.233 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.233 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.233 [2024-11-28 02:24:31.694686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.233 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.233 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.233 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.233 02:24:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.234 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.234 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.234 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.234 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.234 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.234 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.234 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:58.234 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.234 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.234 [2024-11-28 02:24:31.858957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.234 [2024-11-28 02:24:31.859088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.493 [2024-11-28 02:24:31.967642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.493 [2024-11-28 02:24:31.967712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.493 [2024-11-28 02:24:31.967727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:58.493 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.493 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.493 02:24:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.493 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.493 02:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.493 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.493 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.493 02:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.493 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:58.493 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:58.493 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:58.493 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:58.493 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.493 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.493 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.493 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.493 BaseBdev2 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.494 
02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.494 [ 00:08:58.494 { 00:08:58.494 "name": "BaseBdev2", 00:08:58.494 "aliases": [ 00:08:58.494 "bda5e499-17b3-4316-8e01-377e0cabd8b7" 00:08:58.494 ], 00:08:58.494 "product_name": "Malloc disk", 00:08:58.494 "block_size": 512, 00:08:58.494 "num_blocks": 65536, 00:08:58.494 "uuid": "bda5e499-17b3-4316-8e01-377e0cabd8b7", 00:08:58.494 "assigned_rate_limits": { 00:08:58.494 "rw_ios_per_sec": 0, 00:08:58.494 "rw_mbytes_per_sec": 0, 00:08:58.494 "r_mbytes_per_sec": 0, 00:08:58.494 "w_mbytes_per_sec": 0 00:08:58.494 }, 00:08:58.494 "claimed": false, 00:08:58.494 "zoned": false, 00:08:58.494 "supported_io_types": { 00:08:58.494 "read": true, 00:08:58.494 "write": true, 00:08:58.494 "unmap": true, 00:08:58.494 "flush": true, 00:08:58.494 "reset": true, 00:08:58.494 "nvme_admin": false, 00:08:58.494 "nvme_io": false, 00:08:58.494 "nvme_io_md": false, 00:08:58.494 "write_zeroes": true, 
00:08:58.494 "zcopy": true, 00:08:58.494 "get_zone_info": false, 00:08:58.494 "zone_management": false, 00:08:58.494 "zone_append": false, 00:08:58.494 "compare": false, 00:08:58.494 "compare_and_write": false, 00:08:58.494 "abort": true, 00:08:58.494 "seek_hole": false, 00:08:58.494 "seek_data": false, 00:08:58.494 "copy": true, 00:08:58.494 "nvme_iov_md": false 00:08:58.494 }, 00:08:58.494 "memory_domains": [ 00:08:58.494 { 00:08:58.494 "dma_device_id": "system", 00:08:58.494 "dma_device_type": 1 00:08:58.494 }, 00:08:58.494 { 00:08:58.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.494 "dma_device_type": 2 00:08:58.494 } 00:08:58.494 ], 00:08:58.494 "driver_specific": {} 00:08:58.494 } 00:08:58.494 ] 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.494 BaseBdev3 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.494 02:24:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.494 [ 00:08:58.494 { 00:08:58.494 "name": "BaseBdev3", 00:08:58.494 "aliases": [ 00:08:58.494 "e3e581c2-20df-47a4-8fda-b65d6341e0b7" 00:08:58.494 ], 00:08:58.494 "product_name": "Malloc disk", 00:08:58.494 "block_size": 512, 00:08:58.494 "num_blocks": 65536, 00:08:58.494 "uuid": "e3e581c2-20df-47a4-8fda-b65d6341e0b7", 00:08:58.494 "assigned_rate_limits": { 00:08:58.494 "rw_ios_per_sec": 0, 00:08:58.494 "rw_mbytes_per_sec": 0, 00:08:58.494 "r_mbytes_per_sec": 0, 00:08:58.494 "w_mbytes_per_sec": 0 00:08:58.494 }, 00:08:58.494 "claimed": false, 00:08:58.494 "zoned": false, 00:08:58.494 "supported_io_types": { 00:08:58.494 "read": true, 00:08:58.494 "write": true, 00:08:58.494 "unmap": true, 00:08:58.494 "flush": true, 00:08:58.494 "reset": true, 00:08:58.494 "nvme_admin": false, 00:08:58.494 "nvme_io": false, 00:08:58.494 "nvme_io_md": false, 00:08:58.494 "write_zeroes": true, 
00:08:58.494 "zcopy": true, 00:08:58.494 "get_zone_info": false, 00:08:58.494 "zone_management": false, 00:08:58.494 "zone_append": false, 00:08:58.494 "compare": false, 00:08:58.494 "compare_and_write": false, 00:08:58.494 "abort": true, 00:08:58.494 "seek_hole": false, 00:08:58.494 "seek_data": false, 00:08:58.494 "copy": true, 00:08:58.494 "nvme_iov_md": false 00:08:58.494 }, 00:08:58.494 "memory_domains": [ 00:08:58.494 { 00:08:58.494 "dma_device_id": "system", 00:08:58.494 "dma_device_type": 1 00:08:58.494 }, 00:08:58.494 { 00:08:58.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.494 "dma_device_type": 2 00:08:58.494 } 00:08:58.494 ], 00:08:58.494 "driver_specific": {} 00:08:58.494 } 00:08:58.494 ] 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.494 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.752 [2024-11-28 02:24:32.177535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.752 [2024-11-28 02:24:32.177595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.752 [2024-11-28 02:24:32.177616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.752 [2024-11-28 02:24:32.179695] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:58.752 "name": "Existed_Raid", 00:08:58.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.752 "strip_size_kb": 0, 00:08:58.752 "state": "configuring", 00:08:58.752 "raid_level": "raid1", 00:08:58.752 "superblock": false, 00:08:58.752 "num_base_bdevs": 3, 00:08:58.752 "num_base_bdevs_discovered": 2, 00:08:58.752 "num_base_bdevs_operational": 3, 00:08:58.752 "base_bdevs_list": [ 00:08:58.752 { 00:08:58.752 "name": "BaseBdev1", 00:08:58.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.752 "is_configured": false, 00:08:58.752 "data_offset": 0, 00:08:58.752 "data_size": 0 00:08:58.752 }, 00:08:58.752 { 00:08:58.752 "name": "BaseBdev2", 00:08:58.752 "uuid": "bda5e499-17b3-4316-8e01-377e0cabd8b7", 00:08:58.752 "is_configured": true, 00:08:58.752 "data_offset": 0, 00:08:58.752 "data_size": 65536 00:08:58.752 }, 00:08:58.752 { 00:08:58.752 "name": "BaseBdev3", 00:08:58.752 "uuid": "e3e581c2-20df-47a4-8fda-b65d6341e0b7", 00:08:58.752 "is_configured": true, 00:08:58.752 "data_offset": 0, 00:08:58.752 "data_size": 65536 00:08:58.752 } 00:08:58.752 ] 00:08:58.752 }' 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.752 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.010 [2024-11-28 02:24:32.604900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.010 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.010 "name": "Existed_Raid", 00:08:59.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.010 "strip_size_kb": 0, 00:08:59.010 "state": "configuring", 00:08:59.010 "raid_level": "raid1", 00:08:59.010 "superblock": false, 00:08:59.011 "num_base_bdevs": 3, 
00:08:59.011 "num_base_bdevs_discovered": 1, 00:08:59.011 "num_base_bdevs_operational": 3, 00:08:59.011 "base_bdevs_list": [ 00:08:59.011 { 00:08:59.011 "name": "BaseBdev1", 00:08:59.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.011 "is_configured": false, 00:08:59.011 "data_offset": 0, 00:08:59.011 "data_size": 0 00:08:59.011 }, 00:08:59.011 { 00:08:59.011 "name": null, 00:08:59.011 "uuid": "bda5e499-17b3-4316-8e01-377e0cabd8b7", 00:08:59.011 "is_configured": false, 00:08:59.011 "data_offset": 0, 00:08:59.011 "data_size": 65536 00:08:59.011 }, 00:08:59.011 { 00:08:59.011 "name": "BaseBdev3", 00:08:59.011 "uuid": "e3e581c2-20df-47a4-8fda-b65d6341e0b7", 00:08:59.011 "is_configured": true, 00:08:59.011 "data_offset": 0, 00:08:59.011 "data_size": 65536 00:08:59.011 } 00:08:59.011 ] 00:08:59.011 }' 00:08:59.011 02:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.011 02:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.578 02:24:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.578 [2024-11-28 02:24:33.111124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.578 BaseBdev1 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.578 [ 00:08:59.578 { 00:08:59.578 "name": "BaseBdev1", 00:08:59.578 "aliases": [ 00:08:59.578 "d444ff85-a8aa-4652-8478-0f4e3a4874ca" 00:08:59.578 ], 00:08:59.578 "product_name": "Malloc disk", 
00:08:59.578 "block_size": 512, 00:08:59.578 "num_blocks": 65536, 00:08:59.578 "uuid": "d444ff85-a8aa-4652-8478-0f4e3a4874ca", 00:08:59.578 "assigned_rate_limits": { 00:08:59.578 "rw_ios_per_sec": 0, 00:08:59.578 "rw_mbytes_per_sec": 0, 00:08:59.578 "r_mbytes_per_sec": 0, 00:08:59.578 "w_mbytes_per_sec": 0 00:08:59.578 }, 00:08:59.578 "claimed": true, 00:08:59.578 "claim_type": "exclusive_write", 00:08:59.578 "zoned": false, 00:08:59.578 "supported_io_types": { 00:08:59.578 "read": true, 00:08:59.578 "write": true, 00:08:59.578 "unmap": true, 00:08:59.578 "flush": true, 00:08:59.578 "reset": true, 00:08:59.578 "nvme_admin": false, 00:08:59.578 "nvme_io": false, 00:08:59.578 "nvme_io_md": false, 00:08:59.578 "write_zeroes": true, 00:08:59.578 "zcopy": true, 00:08:59.578 "get_zone_info": false, 00:08:59.578 "zone_management": false, 00:08:59.578 "zone_append": false, 00:08:59.578 "compare": false, 00:08:59.578 "compare_and_write": false, 00:08:59.578 "abort": true, 00:08:59.578 "seek_hole": false, 00:08:59.578 "seek_data": false, 00:08:59.578 "copy": true, 00:08:59.578 "nvme_iov_md": false 00:08:59.578 }, 00:08:59.578 "memory_domains": [ 00:08:59.578 { 00:08:59.578 "dma_device_id": "system", 00:08:59.578 "dma_device_type": 1 00:08:59.578 }, 00:08:59.578 { 00:08:59.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.578 "dma_device_type": 2 00:08:59.578 } 00:08:59.578 ], 00:08:59.578 "driver_specific": {} 00:08:59.578 } 00:08:59.578 ] 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:59.578 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.579 "name": "Existed_Raid", 00:08:59.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.579 "strip_size_kb": 0, 00:08:59.579 "state": "configuring", 00:08:59.579 "raid_level": "raid1", 00:08:59.579 "superblock": false, 00:08:59.579 "num_base_bdevs": 3, 00:08:59.579 "num_base_bdevs_discovered": 2, 00:08:59.579 "num_base_bdevs_operational": 3, 00:08:59.579 "base_bdevs_list": [ 00:08:59.579 { 00:08:59.579 "name": "BaseBdev1", 00:08:59.579 "uuid": 
"d444ff85-a8aa-4652-8478-0f4e3a4874ca", 00:08:59.579 "is_configured": true, 00:08:59.579 "data_offset": 0, 00:08:59.579 "data_size": 65536 00:08:59.579 }, 00:08:59.579 { 00:08:59.579 "name": null, 00:08:59.579 "uuid": "bda5e499-17b3-4316-8e01-377e0cabd8b7", 00:08:59.579 "is_configured": false, 00:08:59.579 "data_offset": 0, 00:08:59.579 "data_size": 65536 00:08:59.579 }, 00:08:59.579 { 00:08:59.579 "name": "BaseBdev3", 00:08:59.579 "uuid": "e3e581c2-20df-47a4-8fda-b65d6341e0b7", 00:08:59.579 "is_configured": true, 00:08:59.579 "data_offset": 0, 00:08:59.579 "data_size": 65536 00:08:59.579 } 00:08:59.579 ] 00:08:59.579 }' 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.579 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.147 [2024-11-28 02:24:33.626398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.147 02:24:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.147 "name": "Existed_Raid", 00:09:00.147 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:00.147 "strip_size_kb": 0, 00:09:00.147 "state": "configuring", 00:09:00.147 "raid_level": "raid1", 00:09:00.147 "superblock": false, 00:09:00.147 "num_base_bdevs": 3, 00:09:00.147 "num_base_bdevs_discovered": 1, 00:09:00.147 "num_base_bdevs_operational": 3, 00:09:00.147 "base_bdevs_list": [ 00:09:00.147 { 00:09:00.147 "name": "BaseBdev1", 00:09:00.147 "uuid": "d444ff85-a8aa-4652-8478-0f4e3a4874ca", 00:09:00.147 "is_configured": true, 00:09:00.147 "data_offset": 0, 00:09:00.147 "data_size": 65536 00:09:00.147 }, 00:09:00.147 { 00:09:00.147 "name": null, 00:09:00.147 "uuid": "bda5e499-17b3-4316-8e01-377e0cabd8b7", 00:09:00.147 "is_configured": false, 00:09:00.147 "data_offset": 0, 00:09:00.147 "data_size": 65536 00:09:00.147 }, 00:09:00.147 { 00:09:00.147 "name": null, 00:09:00.147 "uuid": "e3e581c2-20df-47a4-8fda-b65d6341e0b7", 00:09:00.147 "is_configured": false, 00:09:00.147 "data_offset": 0, 00:09:00.147 "data_size": 65536 00:09:00.147 } 00:09:00.147 ] 00:09:00.147 }' 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.147 02:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.406 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.406 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.406 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.406 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.406 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.665 [2024-11-28 02:24:34.113607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.665 "name": "Existed_Raid", 00:09:00.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.665 "strip_size_kb": 0, 00:09:00.665 "state": "configuring", 00:09:00.665 "raid_level": "raid1", 00:09:00.665 "superblock": false, 00:09:00.665 "num_base_bdevs": 3, 00:09:00.665 "num_base_bdevs_discovered": 2, 00:09:00.665 "num_base_bdevs_operational": 3, 00:09:00.665 "base_bdevs_list": [ 00:09:00.665 { 00:09:00.665 "name": "BaseBdev1", 00:09:00.665 "uuid": "d444ff85-a8aa-4652-8478-0f4e3a4874ca", 00:09:00.665 "is_configured": true, 00:09:00.665 "data_offset": 0, 00:09:00.665 "data_size": 65536 00:09:00.665 }, 00:09:00.665 { 00:09:00.665 "name": null, 00:09:00.665 "uuid": "bda5e499-17b3-4316-8e01-377e0cabd8b7", 00:09:00.665 "is_configured": false, 00:09:00.665 "data_offset": 0, 00:09:00.665 "data_size": 65536 00:09:00.665 }, 00:09:00.665 { 00:09:00.665 "name": "BaseBdev3", 00:09:00.665 "uuid": "e3e581c2-20df-47a4-8fda-b65d6341e0b7", 00:09:00.665 "is_configured": true, 00:09:00.665 "data_offset": 0, 00:09:00.665 "data_size": 65536 00:09:00.665 } 00:09:00.665 ] 00:09:00.665 }' 00:09:00.665 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.666 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.925 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.925 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.925 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:00.925 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.925 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.925 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:00.925 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.925 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.925 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.185 [2024-11-28 02:24:34.604797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.185 "name": "Existed_Raid", 00:09:01.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.185 "strip_size_kb": 0, 00:09:01.185 "state": "configuring", 00:09:01.185 "raid_level": "raid1", 00:09:01.185 "superblock": false, 00:09:01.185 "num_base_bdevs": 3, 00:09:01.185 "num_base_bdevs_discovered": 1, 00:09:01.185 "num_base_bdevs_operational": 3, 00:09:01.185 "base_bdevs_list": [ 00:09:01.185 { 00:09:01.185 "name": null, 00:09:01.185 "uuid": "d444ff85-a8aa-4652-8478-0f4e3a4874ca", 00:09:01.185 "is_configured": false, 00:09:01.185 "data_offset": 0, 00:09:01.185 "data_size": 65536 00:09:01.185 }, 00:09:01.185 { 00:09:01.185 "name": null, 00:09:01.185 "uuid": "bda5e499-17b3-4316-8e01-377e0cabd8b7", 00:09:01.185 "is_configured": false, 00:09:01.185 "data_offset": 0, 00:09:01.185 "data_size": 65536 00:09:01.185 }, 00:09:01.185 { 00:09:01.185 "name": "BaseBdev3", 00:09:01.185 "uuid": "e3e581c2-20df-47a4-8fda-b65d6341e0b7", 00:09:01.185 "is_configured": true, 00:09:01.185 "data_offset": 0, 00:09:01.185 "data_size": 65536 00:09:01.185 } 00:09:01.185 ] 00:09:01.185 }' 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.185 02:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:01.444 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.444 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.444 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.444 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.704 [2024-11-28 02:24:35.172440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.704 "name": "Existed_Raid", 00:09:01.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.704 "strip_size_kb": 0, 00:09:01.704 "state": "configuring", 00:09:01.704 "raid_level": "raid1", 00:09:01.704 "superblock": false, 00:09:01.704 "num_base_bdevs": 3, 00:09:01.704 "num_base_bdevs_discovered": 2, 00:09:01.704 "num_base_bdevs_operational": 3, 00:09:01.704 "base_bdevs_list": [ 00:09:01.704 { 00:09:01.704 "name": null, 00:09:01.704 "uuid": "d444ff85-a8aa-4652-8478-0f4e3a4874ca", 00:09:01.704 "is_configured": false, 00:09:01.704 "data_offset": 0, 00:09:01.704 "data_size": 65536 00:09:01.704 }, 00:09:01.704 { 00:09:01.704 "name": "BaseBdev2", 00:09:01.704 "uuid": "bda5e499-17b3-4316-8e01-377e0cabd8b7", 00:09:01.704 "is_configured": true, 00:09:01.704 "data_offset": 0, 00:09:01.704 "data_size": 65536 00:09:01.704 }, 00:09:01.704 { 00:09:01.704 "name": "BaseBdev3", 
00:09:01.704 "uuid": "e3e581c2-20df-47a4-8fda-b65d6341e0b7", 00:09:01.704 "is_configured": true, 00:09:01.704 "data_offset": 0, 00:09:01.704 "data_size": 65536 00:09:01.704 } 00:09:01.704 ] 00:09:01.704 }' 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.704 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.964 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d444ff85-a8aa-4652-8478-0f4e3a4874ca 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.224 [2024-11-28 02:24:35.686442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:02.224 [2024-11-28 02:24:35.686502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:02.224 [2024-11-28 02:24:35.686510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:02.224 [2024-11-28 02:24:35.686805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:02.224 [2024-11-28 02:24:35.686989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:02.224 [2024-11-28 02:24:35.687006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:02.224 [2024-11-28 02:24:35.687272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.224 NewBaseBdev 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.224 
02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.224 [ 00:09:02.224 { 00:09:02.224 "name": "NewBaseBdev", 00:09:02.224 "aliases": [ 00:09:02.224 "d444ff85-a8aa-4652-8478-0f4e3a4874ca" 00:09:02.224 ], 00:09:02.224 "product_name": "Malloc disk", 00:09:02.224 "block_size": 512, 00:09:02.224 "num_blocks": 65536, 00:09:02.224 "uuid": "d444ff85-a8aa-4652-8478-0f4e3a4874ca", 00:09:02.224 "assigned_rate_limits": { 00:09:02.224 "rw_ios_per_sec": 0, 00:09:02.224 "rw_mbytes_per_sec": 0, 00:09:02.224 "r_mbytes_per_sec": 0, 00:09:02.224 "w_mbytes_per_sec": 0 00:09:02.224 }, 00:09:02.224 "claimed": true, 00:09:02.224 "claim_type": "exclusive_write", 00:09:02.224 "zoned": false, 00:09:02.224 "supported_io_types": { 00:09:02.224 "read": true, 00:09:02.224 "write": true, 00:09:02.224 "unmap": true, 00:09:02.224 "flush": true, 00:09:02.224 "reset": true, 00:09:02.224 "nvme_admin": false, 00:09:02.224 "nvme_io": false, 00:09:02.224 "nvme_io_md": false, 00:09:02.224 "write_zeroes": true, 00:09:02.224 "zcopy": true, 00:09:02.224 "get_zone_info": false, 00:09:02.224 "zone_management": false, 00:09:02.224 "zone_append": false, 00:09:02.224 "compare": false, 00:09:02.224 "compare_and_write": false, 00:09:02.224 "abort": true, 00:09:02.224 "seek_hole": false, 00:09:02.224 "seek_data": false, 00:09:02.224 "copy": true, 00:09:02.224 "nvme_iov_md": false 00:09:02.224 }, 00:09:02.224 "memory_domains": [ 00:09:02.224 { 00:09:02.224 "dma_device_id": "system", 00:09:02.224 "dma_device_type": 1 
00:09:02.224 }, 00:09:02.224 { 00:09:02.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.224 "dma_device_type": 2 00:09:02.224 } 00:09:02.224 ], 00:09:02.224 "driver_specific": {} 00:09:02.224 } 00:09:02.224 ] 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.224 "name": "Existed_Raid", 00:09:02.224 "uuid": "69a3d1f9-669c-42bc-8a9c-c0a353e13c73", 00:09:02.224 "strip_size_kb": 0, 00:09:02.224 "state": "online", 00:09:02.224 "raid_level": "raid1", 00:09:02.224 "superblock": false, 00:09:02.224 "num_base_bdevs": 3, 00:09:02.224 "num_base_bdevs_discovered": 3, 00:09:02.224 "num_base_bdevs_operational": 3, 00:09:02.224 "base_bdevs_list": [ 00:09:02.224 { 00:09:02.224 "name": "NewBaseBdev", 00:09:02.224 "uuid": "d444ff85-a8aa-4652-8478-0f4e3a4874ca", 00:09:02.224 "is_configured": true, 00:09:02.224 "data_offset": 0, 00:09:02.224 "data_size": 65536 00:09:02.224 }, 00:09:02.224 { 00:09:02.224 "name": "BaseBdev2", 00:09:02.224 "uuid": "bda5e499-17b3-4316-8e01-377e0cabd8b7", 00:09:02.224 "is_configured": true, 00:09:02.224 "data_offset": 0, 00:09:02.224 "data_size": 65536 00:09:02.224 }, 00:09:02.224 { 00:09:02.224 "name": "BaseBdev3", 00:09:02.224 "uuid": "e3e581c2-20df-47a4-8fda-b65d6341e0b7", 00:09:02.224 "is_configured": true, 00:09:02.224 "data_offset": 0, 00:09:02.224 "data_size": 65536 00:09:02.224 } 00:09:02.224 ] 00:09:02.224 }' 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.224 02:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.484 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.484 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.484 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.484 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:02.484 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.484 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.484 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.484 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.484 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.744 [2024-11-28 02:24:36.170098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.744 "name": "Existed_Raid", 00:09:02.744 "aliases": [ 00:09:02.744 "69a3d1f9-669c-42bc-8a9c-c0a353e13c73" 00:09:02.744 ], 00:09:02.744 "product_name": "Raid Volume", 00:09:02.744 "block_size": 512, 00:09:02.744 "num_blocks": 65536, 00:09:02.744 "uuid": "69a3d1f9-669c-42bc-8a9c-c0a353e13c73", 00:09:02.744 "assigned_rate_limits": { 00:09:02.744 "rw_ios_per_sec": 0, 00:09:02.744 "rw_mbytes_per_sec": 0, 00:09:02.744 "r_mbytes_per_sec": 0, 00:09:02.744 "w_mbytes_per_sec": 0 00:09:02.744 }, 00:09:02.744 "claimed": false, 00:09:02.744 "zoned": false, 00:09:02.744 "supported_io_types": { 00:09:02.744 "read": true, 00:09:02.744 "write": true, 00:09:02.744 "unmap": false, 00:09:02.744 "flush": false, 00:09:02.744 "reset": true, 00:09:02.744 "nvme_admin": false, 00:09:02.744 "nvme_io": false, 00:09:02.744 "nvme_io_md": false, 00:09:02.744 "write_zeroes": true, 00:09:02.744 "zcopy": false, 00:09:02.744 "get_zone_info": false, 00:09:02.744 "zone_management": false, 00:09:02.744 
"zone_append": false, 00:09:02.744 "compare": false, 00:09:02.744 "compare_and_write": false, 00:09:02.744 "abort": false, 00:09:02.744 "seek_hole": false, 00:09:02.744 "seek_data": false, 00:09:02.744 "copy": false, 00:09:02.744 "nvme_iov_md": false 00:09:02.744 }, 00:09:02.744 "memory_domains": [ 00:09:02.744 { 00:09:02.744 "dma_device_id": "system", 00:09:02.744 "dma_device_type": 1 00:09:02.744 }, 00:09:02.744 { 00:09:02.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.744 "dma_device_type": 2 00:09:02.744 }, 00:09:02.744 { 00:09:02.744 "dma_device_id": "system", 00:09:02.744 "dma_device_type": 1 00:09:02.744 }, 00:09:02.744 { 00:09:02.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.744 "dma_device_type": 2 00:09:02.744 }, 00:09:02.744 { 00:09:02.744 "dma_device_id": "system", 00:09:02.744 "dma_device_type": 1 00:09:02.744 }, 00:09:02.744 { 00:09:02.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.744 "dma_device_type": 2 00:09:02.744 } 00:09:02.744 ], 00:09:02.744 "driver_specific": { 00:09:02.744 "raid": { 00:09:02.744 "uuid": "69a3d1f9-669c-42bc-8a9c-c0a353e13c73", 00:09:02.744 "strip_size_kb": 0, 00:09:02.744 "state": "online", 00:09:02.744 "raid_level": "raid1", 00:09:02.744 "superblock": false, 00:09:02.744 "num_base_bdevs": 3, 00:09:02.744 "num_base_bdevs_discovered": 3, 00:09:02.744 "num_base_bdevs_operational": 3, 00:09:02.744 "base_bdevs_list": [ 00:09:02.744 { 00:09:02.744 "name": "NewBaseBdev", 00:09:02.744 "uuid": "d444ff85-a8aa-4652-8478-0f4e3a4874ca", 00:09:02.744 "is_configured": true, 00:09:02.744 "data_offset": 0, 00:09:02.744 "data_size": 65536 00:09:02.744 }, 00:09:02.744 { 00:09:02.744 "name": "BaseBdev2", 00:09:02.744 "uuid": "bda5e499-17b3-4316-8e01-377e0cabd8b7", 00:09:02.744 "is_configured": true, 00:09:02.744 "data_offset": 0, 00:09:02.744 "data_size": 65536 00:09:02.744 }, 00:09:02.744 { 00:09:02.744 "name": "BaseBdev3", 00:09:02.744 "uuid": "e3e581c2-20df-47a4-8fda-b65d6341e0b7", 00:09:02.744 "is_configured": true, 
00:09:02.744 "data_offset": 0, 00:09:02.744 "data_size": 65536 00:09:02.744 } 00:09:02.744 ] 00:09:02.744 } 00:09:02.744 } 00:09:02.744 }' 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:02.744 BaseBdev2 00:09:02.744 BaseBdev3' 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.744 02:24:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.744 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.745 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.745 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.745 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.745 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.745 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.745 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.745 [2024-11-28 02:24:36.417237] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:02.745 [2024-11-28 02:24:36.417284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.745 [2024-11-28 02:24:36.417367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.745 [2024-11-28 02:24:36.417700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.745 [2024-11-28 02:24:36.417719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:03.004 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.004 02:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67204 00:09:03.004 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67204 ']' 00:09:03.004 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67204 00:09:03.004 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:03.004 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.004 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67204 00:09:03.004 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.004 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.004 killing process with pid 67204 00:09:03.004 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67204' 00:09:03.005 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67204 00:09:03.005 [2024-11-28 02:24:36.453456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:03.005 02:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67204 00:09:03.264 [2024-11-28 02:24:36.779438] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:04.644 00:09:04.644 real 0m10.578s 00:09:04.644 user 0m16.606s 00:09:04.644 sys 0m1.867s 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.644 ************************************ 00:09:04.644 END TEST raid_state_function_test 00:09:04.644 ************************************ 00:09:04.644 02:24:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:04.644 02:24:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:04.644 02:24:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.644 02:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.644 ************************************ 00:09:04.644 START TEST raid_state_function_test_sb 00:09:04.644 ************************************ 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
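The `(( i = 1 ))` / `echo BaseBdevN` / `(( i++ ))` sequence in the trace above is the test helper building its list of base bdev names. A minimal standalone sketch of that loop (simplified from what the xtrace shows; the real script captures the echoed names via command substitution into the `base_bdevs` array):

```shell
# Build the base bdev name list the same way the traced loop does:
# one "BaseBdevN" per base device, collected into an array.
num_base_bdevs=3
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3
```

This matches the array the trace reports as `base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')`.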
00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67822 00:09:04.644 Process raid pid: 67822 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67822' 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67822 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67822 ']' 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.644 02:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.644 [2024-11-28 02:24:38.189589] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
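The flag derivation visible in the trace ( `'[' raid1 '!=' raid1 ']'` leaving `strip_size=0`, and `'[' true = true ']'` setting `superblock_create_arg=-s` ) can be sketched as plain shell. This is a simplified reconstruction from the xtrace, not the verbatim helper; the `-z 64` value for non-raid1 levels is a placeholder assumption, since this run never takes that branch:

```shell
# Derive bdev_raid_create arguments the way the traced test does:
# raid1 takes no strip size, and superblock=true adds -s.
raid_level=raid1
superblock=true
strip_size_create_arg=""
if [ "$raid_level" != "raid1" ]; then
    strip_size_create_arg="-z 64"   # placeholder strip size for striped levels
fi
superblock_create_arg=""
if [ "$superblock" = "true" ]; then
    superblock_create_arg="-s"
fi
echo "bdev_raid_create $superblock_create_arg -r $raid_level -n Existed_Raid"
```

With this run's inputs the echoed command carries `-s -r raid1` and no strip-size flag, matching the RPC calls seen later in the trace.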
00:09:04.644 [2024-11-28 02:24:38.189712] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.904 [2024-11-28 02:24:38.370187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.904 [2024-11-28 02:24:38.510346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.163 [2024-11-28 02:24:38.757261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.163 [2024-11-28 02:24:38.757300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 [2024-11-28 02:24:39.065060] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.423 [2024-11-28 02:24:39.065121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.423 [2024-11-28 02:24:39.065138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.423 [2024-11-28 02:24:39.065148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.423 [2024-11-28 02:24:39.065154] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:05.423 [2024-11-28 02:24:39.065164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 02:24:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.683 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.683 "name": "Existed_Raid", 00:09:05.683 "uuid": "f64e8629-f226-4b3e-b048-dc803159e9b9", 00:09:05.683 "strip_size_kb": 0, 00:09:05.683 "state": "configuring", 00:09:05.683 "raid_level": "raid1", 00:09:05.683 "superblock": true, 00:09:05.683 "num_base_bdevs": 3, 00:09:05.683 "num_base_bdevs_discovered": 0, 00:09:05.683 "num_base_bdevs_operational": 3, 00:09:05.683 "base_bdevs_list": [ 00:09:05.683 { 00:09:05.683 "name": "BaseBdev1", 00:09:05.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.683 "is_configured": false, 00:09:05.683 "data_offset": 0, 00:09:05.683 "data_size": 0 00:09:05.683 }, 00:09:05.683 { 00:09:05.683 "name": "BaseBdev2", 00:09:05.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.683 "is_configured": false, 00:09:05.683 "data_offset": 0, 00:09:05.683 "data_size": 0 00:09:05.683 }, 00:09:05.683 { 00:09:05.683 "name": "BaseBdev3", 00:09:05.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.683 "is_configured": false, 00:09:05.683 "data_offset": 0, 00:09:05.683 "data_size": 0 00:09:05.683 } 00:09:05.683 ] 00:09:05.683 }' 00:09:05.683 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.683 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.943 [2024-11-28 02:24:39.488286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.943 [2024-11-28 02:24:39.488341] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.943 [2024-11-28 02:24:39.500227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.943 [2024-11-28 02:24:39.500271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.943 [2024-11-28 02:24:39.500280] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.943 [2024-11-28 02:24:39.500290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.943 [2024-11-28 02:24:39.500296] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.943 [2024-11-28 02:24:39.500305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.943 [2024-11-28 02:24:39.554538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.943 BaseBdev1 
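The state checks that follow all funnel `bdev_raid_get_bdevs` output through the same `jq` filters shown in the trace. A standalone sketch of that filtering, using a hand-written JSON stand-in rather than real RPC output (requires `jq`):

```shell
# Hand-written stand-in for `rpc.py bdev_raid_get_bdevs all` output; the
# traced helpers apply the same two jq filters to the real RPC response.
bdevs_json='[{"name": "Existed_Raid", "state": "configuring",
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": false}
  ]}]'
# Pull the state of the named raid bdev, as verify_raid_bdev_state does.
state=$(echo "$bdevs_json" | jq -r '.[] | select(.name == "Existed_Raid") | .state')
# List only the configured base bdevs, as verify_raid_bdev_properties does.
configured=$(echo "$bdevs_json" | jq -r '.[].base_bdevs_list[] | select(.is_configured == true).name')
echo "$state: $configured"
```

With the stand-in above, `state` comes back as `configuring` and `configured` as `BaseBdev1`; in the real run the same filters drive the `[[ ... == ... ]]` assertions that pass throughout this log.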
00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.943 [ 00:09:05.943 { 00:09:05.943 "name": "BaseBdev1", 00:09:05.943 "aliases": [ 00:09:05.943 "0c70c5da-aaa3-4ef0-b311-d1d8d87fccaf" 00:09:05.943 ], 00:09:05.943 "product_name": "Malloc disk", 00:09:05.943 "block_size": 512, 00:09:05.943 "num_blocks": 65536, 00:09:05.943 "uuid": "0c70c5da-aaa3-4ef0-b311-d1d8d87fccaf", 00:09:05.943 "assigned_rate_limits": { 00:09:05.943 
"rw_ios_per_sec": 0, 00:09:05.943 "rw_mbytes_per_sec": 0, 00:09:05.943 "r_mbytes_per_sec": 0, 00:09:05.943 "w_mbytes_per_sec": 0 00:09:05.943 }, 00:09:05.943 "claimed": true, 00:09:05.943 "claim_type": "exclusive_write", 00:09:05.943 "zoned": false, 00:09:05.943 "supported_io_types": { 00:09:05.943 "read": true, 00:09:05.943 "write": true, 00:09:05.943 "unmap": true, 00:09:05.943 "flush": true, 00:09:05.943 "reset": true, 00:09:05.943 "nvme_admin": false, 00:09:05.943 "nvme_io": false, 00:09:05.943 "nvme_io_md": false, 00:09:05.943 "write_zeroes": true, 00:09:05.943 "zcopy": true, 00:09:05.943 "get_zone_info": false, 00:09:05.943 "zone_management": false, 00:09:05.943 "zone_append": false, 00:09:05.943 "compare": false, 00:09:05.943 "compare_and_write": false, 00:09:05.943 "abort": true, 00:09:05.943 "seek_hole": false, 00:09:05.943 "seek_data": false, 00:09:05.943 "copy": true, 00:09:05.943 "nvme_iov_md": false 00:09:05.943 }, 00:09:05.943 "memory_domains": [ 00:09:05.943 { 00:09:05.943 "dma_device_id": "system", 00:09:05.943 "dma_device_type": 1 00:09:05.943 }, 00:09:05.943 { 00:09:05.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.943 "dma_device_type": 2 00:09:05.943 } 00:09:05.943 ], 00:09:05.943 "driver_specific": {} 00:09:05.943 } 00:09:05.943 ] 00:09:05.943 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.944 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.203 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.203 "name": "Existed_Raid", 00:09:06.203 "uuid": "5cf698ea-2001-4a53-9a7b-66f75240e197", 00:09:06.203 "strip_size_kb": 0, 00:09:06.203 "state": "configuring", 00:09:06.203 "raid_level": "raid1", 00:09:06.203 "superblock": true, 00:09:06.203 "num_base_bdevs": 3, 00:09:06.203 "num_base_bdevs_discovered": 1, 00:09:06.203 "num_base_bdevs_operational": 3, 00:09:06.203 "base_bdevs_list": [ 00:09:06.203 { 00:09:06.203 "name": "BaseBdev1", 00:09:06.203 "uuid": "0c70c5da-aaa3-4ef0-b311-d1d8d87fccaf", 00:09:06.203 "is_configured": true, 00:09:06.203 "data_offset": 2048, 00:09:06.203 "data_size": 63488 
00:09:06.203 }, 00:09:06.203 { 00:09:06.203 "name": "BaseBdev2", 00:09:06.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.203 "is_configured": false, 00:09:06.203 "data_offset": 0, 00:09:06.203 "data_size": 0 00:09:06.203 }, 00:09:06.203 { 00:09:06.203 "name": "BaseBdev3", 00:09:06.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.203 "is_configured": false, 00:09:06.203 "data_offset": 0, 00:09:06.203 "data_size": 0 00:09:06.203 } 00:09:06.203 ] 00:09:06.203 }' 00:09:06.203 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.203 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.463 02:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:06.463 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.463 02:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.463 [2024-11-28 02:24:39.997825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.463 [2024-11-28 02:24:39.997895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.463 [2024-11-28 02:24:40.009842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.463 [2024-11-28 02:24:40.011956] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.463 [2024-11-28 02:24:40.011996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.463 [2024-11-28 02:24:40.012006] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.463 [2024-11-28 02:24:40.012016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.463 "name": "Existed_Raid", 00:09:06.463 "uuid": "83033e89-b3e9-481d-a253-7f92656b55ab", 00:09:06.463 "strip_size_kb": 0, 00:09:06.463 "state": "configuring", 00:09:06.463 "raid_level": "raid1", 00:09:06.463 "superblock": true, 00:09:06.463 "num_base_bdevs": 3, 00:09:06.463 "num_base_bdevs_discovered": 1, 00:09:06.463 "num_base_bdevs_operational": 3, 00:09:06.463 "base_bdevs_list": [ 00:09:06.463 { 00:09:06.463 "name": "BaseBdev1", 00:09:06.463 "uuid": "0c70c5da-aaa3-4ef0-b311-d1d8d87fccaf", 00:09:06.463 "is_configured": true, 00:09:06.463 "data_offset": 2048, 00:09:06.463 "data_size": 63488 00:09:06.463 }, 00:09:06.463 { 00:09:06.463 "name": "BaseBdev2", 00:09:06.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.463 "is_configured": false, 00:09:06.463 "data_offset": 0, 00:09:06.463 "data_size": 0 00:09:06.463 }, 00:09:06.463 { 00:09:06.463 "name": "BaseBdev3", 00:09:06.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.463 "is_configured": false, 00:09:06.463 "data_offset": 0, 00:09:06.463 "data_size": 0 00:09:06.463 } 00:09:06.463 ] 00:09:06.463 }' 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.463 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.032 [2024-11-28 02:24:40.525928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.032 BaseBdev2 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.032 [ 00:09:07.032 { 00:09:07.032 "name": "BaseBdev2", 00:09:07.032 "aliases": [ 00:09:07.032 "4493b7b0-b150-473f-a780-3724717f1419" 00:09:07.032 ], 00:09:07.032 "product_name": "Malloc disk", 00:09:07.032 "block_size": 512, 00:09:07.032 "num_blocks": 65536, 00:09:07.032 "uuid": "4493b7b0-b150-473f-a780-3724717f1419", 00:09:07.032 "assigned_rate_limits": { 00:09:07.032 "rw_ios_per_sec": 0, 00:09:07.032 "rw_mbytes_per_sec": 0, 00:09:07.032 "r_mbytes_per_sec": 0, 00:09:07.032 "w_mbytes_per_sec": 0 00:09:07.032 }, 00:09:07.032 "claimed": true, 00:09:07.032 "claim_type": "exclusive_write", 00:09:07.032 "zoned": false, 00:09:07.032 "supported_io_types": { 00:09:07.032 "read": true, 00:09:07.032 "write": true, 00:09:07.032 "unmap": true, 00:09:07.032 "flush": true, 00:09:07.032 "reset": true, 00:09:07.032 "nvme_admin": false, 00:09:07.032 "nvme_io": false, 00:09:07.032 "nvme_io_md": false, 00:09:07.032 "write_zeroes": true, 00:09:07.032 "zcopy": true, 00:09:07.032 "get_zone_info": false, 00:09:07.032 "zone_management": false, 00:09:07.032 "zone_append": false, 00:09:07.032 "compare": false, 00:09:07.032 "compare_and_write": false, 00:09:07.032 "abort": true, 00:09:07.032 "seek_hole": false, 00:09:07.032 "seek_data": false, 00:09:07.032 "copy": true, 00:09:07.032 "nvme_iov_md": false 00:09:07.032 }, 00:09:07.032 "memory_domains": [ 00:09:07.032 { 00:09:07.032 "dma_device_id": "system", 00:09:07.032 "dma_device_type": 1 00:09:07.032 }, 00:09:07.032 { 00:09:07.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.032 "dma_device_type": 2 00:09:07.032 } 00:09:07.032 ], 00:09:07.032 "driver_specific": {} 00:09:07.032 } 00:09:07.032 ] 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.032 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.032 
02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.032 "name": "Existed_Raid", 00:09:07.032 "uuid": "83033e89-b3e9-481d-a253-7f92656b55ab", 00:09:07.032 "strip_size_kb": 0, 00:09:07.033 "state": "configuring", 00:09:07.033 "raid_level": "raid1", 00:09:07.033 "superblock": true, 00:09:07.033 "num_base_bdevs": 3, 00:09:07.033 "num_base_bdevs_discovered": 2, 00:09:07.033 "num_base_bdevs_operational": 3, 00:09:07.033 "base_bdevs_list": [ 00:09:07.033 { 00:09:07.033 "name": "BaseBdev1", 00:09:07.033 "uuid": "0c70c5da-aaa3-4ef0-b311-d1d8d87fccaf", 00:09:07.033 "is_configured": true, 00:09:07.033 "data_offset": 2048, 00:09:07.033 "data_size": 63488 00:09:07.033 }, 00:09:07.033 { 00:09:07.033 "name": "BaseBdev2", 00:09:07.033 "uuid": "4493b7b0-b150-473f-a780-3724717f1419", 00:09:07.033 "is_configured": true, 00:09:07.033 "data_offset": 2048, 00:09:07.033 "data_size": 63488 00:09:07.033 }, 00:09:07.033 { 00:09:07.033 "name": "BaseBdev3", 00:09:07.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.033 "is_configured": false, 00:09:07.033 "data_offset": 0, 00:09:07.033 "data_size": 0 00:09:07.033 } 00:09:07.033 ] 00:09:07.033 }' 00:09:07.033 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.033 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.626 02:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.626 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.626 02:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.626 [2024-11-28 02:24:41.040608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.626 [2024-11-28 02:24:41.040901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:07.626 [2024-11-28 02:24:41.040945] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:07.626 [2024-11-28 02:24:41.041266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:07.626 BaseBdev3 00:09:07.626 [2024-11-28 02:24:41.041446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:07.626 [2024-11-28 02:24:41.041460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:07.626 [2024-11-28 02:24:41.041620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.626 02:24:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.626 [ 00:09:07.626 { 00:09:07.626 "name": "BaseBdev3", 00:09:07.626 "aliases": [ 00:09:07.626 "be3ada0b-5acc-4d6c-8fd0-0f58bae38201" 00:09:07.626 ], 00:09:07.626 "product_name": "Malloc disk", 00:09:07.626 "block_size": 512, 00:09:07.626 "num_blocks": 65536, 00:09:07.626 "uuid": "be3ada0b-5acc-4d6c-8fd0-0f58bae38201", 00:09:07.626 "assigned_rate_limits": { 00:09:07.626 "rw_ios_per_sec": 0, 00:09:07.626 "rw_mbytes_per_sec": 0, 00:09:07.626 "r_mbytes_per_sec": 0, 00:09:07.626 "w_mbytes_per_sec": 0 00:09:07.626 }, 00:09:07.626 "claimed": true, 00:09:07.626 "claim_type": "exclusive_write", 00:09:07.626 "zoned": false, 00:09:07.626 "supported_io_types": { 00:09:07.626 "read": true, 00:09:07.626 "write": true, 00:09:07.626 "unmap": true, 00:09:07.626 "flush": true, 00:09:07.626 "reset": true, 00:09:07.626 "nvme_admin": false, 00:09:07.626 "nvme_io": false, 00:09:07.626 "nvme_io_md": false, 00:09:07.626 "write_zeroes": true, 00:09:07.626 "zcopy": true, 00:09:07.626 "get_zone_info": false, 00:09:07.626 "zone_management": false, 00:09:07.626 "zone_append": false, 00:09:07.626 "compare": false, 00:09:07.626 "compare_and_write": false, 00:09:07.626 "abort": true, 00:09:07.626 "seek_hole": false, 00:09:07.626 "seek_data": false, 00:09:07.626 "copy": true, 00:09:07.626 "nvme_iov_md": false 00:09:07.626 }, 00:09:07.626 "memory_domains": [ 00:09:07.626 { 00:09:07.626 "dma_device_id": "system", 00:09:07.626 "dma_device_type": 1 00:09:07.626 }, 00:09:07.626 { 00:09:07.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.626 "dma_device_type": 2 00:09:07.626 } 00:09:07.626 ], 00:09:07.626 "driver_specific": {} 00:09:07.626 } 00:09:07.626 ] 
00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.626 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.627 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.627 
02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.627 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.627 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.627 "name": "Existed_Raid", 00:09:07.627 "uuid": "83033e89-b3e9-481d-a253-7f92656b55ab", 00:09:07.627 "strip_size_kb": 0, 00:09:07.627 "state": "online", 00:09:07.627 "raid_level": "raid1", 00:09:07.627 "superblock": true, 00:09:07.627 "num_base_bdevs": 3, 00:09:07.627 "num_base_bdevs_discovered": 3, 00:09:07.627 "num_base_bdevs_operational": 3, 00:09:07.627 "base_bdevs_list": [ 00:09:07.627 { 00:09:07.627 "name": "BaseBdev1", 00:09:07.627 "uuid": "0c70c5da-aaa3-4ef0-b311-d1d8d87fccaf", 00:09:07.627 "is_configured": true, 00:09:07.627 "data_offset": 2048, 00:09:07.627 "data_size": 63488 00:09:07.627 }, 00:09:07.627 { 00:09:07.627 "name": "BaseBdev2", 00:09:07.627 "uuid": "4493b7b0-b150-473f-a780-3724717f1419", 00:09:07.627 "is_configured": true, 00:09:07.627 "data_offset": 2048, 00:09:07.627 "data_size": 63488 00:09:07.627 }, 00:09:07.627 { 00:09:07.627 "name": "BaseBdev3", 00:09:07.627 "uuid": "be3ada0b-5acc-4d6c-8fd0-0f58bae38201", 00:09:07.627 "is_configured": true, 00:09:07.627 "data_offset": 2048, 00:09:07.627 "data_size": 63488 00:09:07.627 } 00:09:07.627 ] 00:09:07.627 }' 00:09:07.627 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.627 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 [2024-11-28 02:24:41.520213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.886 "name": "Existed_Raid", 00:09:07.886 "aliases": [ 00:09:07.886 "83033e89-b3e9-481d-a253-7f92656b55ab" 00:09:07.886 ], 00:09:07.886 "product_name": "Raid Volume", 00:09:07.886 "block_size": 512, 00:09:07.886 "num_blocks": 63488, 00:09:07.886 "uuid": "83033e89-b3e9-481d-a253-7f92656b55ab", 00:09:07.886 "assigned_rate_limits": { 00:09:07.886 "rw_ios_per_sec": 0, 00:09:07.886 "rw_mbytes_per_sec": 0, 00:09:07.886 "r_mbytes_per_sec": 0, 00:09:07.886 "w_mbytes_per_sec": 0 00:09:07.886 }, 00:09:07.886 "claimed": false, 00:09:07.886 "zoned": false, 00:09:07.886 "supported_io_types": { 00:09:07.886 "read": true, 00:09:07.886 "write": true, 00:09:07.886 "unmap": false, 00:09:07.886 "flush": false, 00:09:07.886 "reset": true, 00:09:07.886 "nvme_admin": false, 00:09:07.886 "nvme_io": false, 00:09:07.886 "nvme_io_md": false, 00:09:07.886 "write_zeroes": true, 
00:09:07.886 "zcopy": false, 00:09:07.886 "get_zone_info": false, 00:09:07.886 "zone_management": false, 00:09:07.886 "zone_append": false, 00:09:07.886 "compare": false, 00:09:07.886 "compare_and_write": false, 00:09:07.886 "abort": false, 00:09:07.886 "seek_hole": false, 00:09:07.886 "seek_data": false, 00:09:07.886 "copy": false, 00:09:07.886 "nvme_iov_md": false 00:09:07.886 }, 00:09:07.886 "memory_domains": [ 00:09:07.886 { 00:09:07.886 "dma_device_id": "system", 00:09:07.886 "dma_device_type": 1 00:09:07.886 }, 00:09:07.886 { 00:09:07.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.886 "dma_device_type": 2 00:09:07.886 }, 00:09:07.886 { 00:09:07.886 "dma_device_id": "system", 00:09:07.886 "dma_device_type": 1 00:09:07.886 }, 00:09:07.886 { 00:09:07.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.886 "dma_device_type": 2 00:09:07.886 }, 00:09:07.886 { 00:09:07.886 "dma_device_id": "system", 00:09:07.886 "dma_device_type": 1 00:09:07.886 }, 00:09:07.886 { 00:09:07.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.886 "dma_device_type": 2 00:09:07.886 } 00:09:07.886 ], 00:09:07.886 "driver_specific": { 00:09:07.886 "raid": { 00:09:07.886 "uuid": "83033e89-b3e9-481d-a253-7f92656b55ab", 00:09:07.887 "strip_size_kb": 0, 00:09:07.887 "state": "online", 00:09:07.887 "raid_level": "raid1", 00:09:07.887 "superblock": true, 00:09:07.887 "num_base_bdevs": 3, 00:09:07.887 "num_base_bdevs_discovered": 3, 00:09:07.887 "num_base_bdevs_operational": 3, 00:09:07.887 "base_bdevs_list": [ 00:09:07.887 { 00:09:07.887 "name": "BaseBdev1", 00:09:07.887 "uuid": "0c70c5da-aaa3-4ef0-b311-d1d8d87fccaf", 00:09:07.887 "is_configured": true, 00:09:07.887 "data_offset": 2048, 00:09:07.887 "data_size": 63488 00:09:07.887 }, 00:09:07.887 { 00:09:07.887 "name": "BaseBdev2", 00:09:07.887 "uuid": "4493b7b0-b150-473f-a780-3724717f1419", 00:09:07.887 "is_configured": true, 00:09:07.887 "data_offset": 2048, 00:09:07.887 "data_size": 63488 00:09:07.887 }, 00:09:07.887 { 
00:09:07.887 "name": "BaseBdev3", 00:09:07.887 "uuid": "be3ada0b-5acc-4d6c-8fd0-0f58bae38201", 00:09:07.887 "is_configured": true, 00:09:07.887 "data_offset": 2048, 00:09:07.887 "data_size": 63488 00:09:07.887 } 00:09:07.887 ] 00:09:07.887 } 00:09:07.887 } 00:09:07.887 }' 00:09:07.887 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.146 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:08.146 BaseBdev2 00:09:08.146 BaseBdev3' 00:09:08.146 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.146 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.146 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.146 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:08.146 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.146 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.147 02:24:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.147 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.147 [2024-11-28 02:24:41.739528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.407 
02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.407 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.407 "name": "Existed_Raid", 00:09:08.407 "uuid": "83033e89-b3e9-481d-a253-7f92656b55ab", 00:09:08.407 "strip_size_kb": 0, 00:09:08.407 "state": "online", 00:09:08.407 "raid_level": "raid1", 00:09:08.407 "superblock": true, 00:09:08.407 "num_base_bdevs": 3, 00:09:08.407 "num_base_bdevs_discovered": 2, 00:09:08.407 "num_base_bdevs_operational": 2, 00:09:08.407 "base_bdevs_list": [ 00:09:08.407 { 00:09:08.407 "name": null, 00:09:08.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.407 "is_configured": false, 00:09:08.407 "data_offset": 0, 00:09:08.407 "data_size": 63488 00:09:08.407 }, 00:09:08.407 { 00:09:08.407 "name": "BaseBdev2", 00:09:08.407 "uuid": "4493b7b0-b150-473f-a780-3724717f1419", 00:09:08.407 "is_configured": true, 00:09:08.408 "data_offset": 2048, 00:09:08.408 "data_size": 63488 00:09:08.408 }, 00:09:08.408 { 00:09:08.408 "name": "BaseBdev3", 00:09:08.408 "uuid": "be3ada0b-5acc-4d6c-8fd0-0f58bae38201", 00:09:08.408 "is_configured": true, 00:09:08.408 "data_offset": 2048, 00:09:08.408 "data_size": 63488 00:09:08.408 } 00:09:08.408 ] 00:09:08.408 }' 00:09:08.408 02:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.408 
02:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.668 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.668 [2024-11-28 02:24:42.311996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.928 [2024-11-28 02:24:42.471742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.928 [2024-11-28 02:24:42.471877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.928 [2024-11-28 02:24:42.580585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.928 [2024-11-28 02:24:42.580649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.928 [2024-11-28 02:24:42.580663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:08.928 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.188 BaseBdev2 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:09.188 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.189 02:24:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.189 [ 00:09:09.189 { 00:09:09.189 "name": "BaseBdev2", 00:09:09.189 "aliases": [ 00:09:09.189 "487cd976-78b1-4bbb-bd9e-44333fe2e7ff" 00:09:09.189 ], 00:09:09.189 "product_name": "Malloc disk", 00:09:09.189 "block_size": 512, 00:09:09.189 "num_blocks": 65536, 00:09:09.189 "uuid": "487cd976-78b1-4bbb-bd9e-44333fe2e7ff", 00:09:09.189 "assigned_rate_limits": { 00:09:09.189 "rw_ios_per_sec": 0, 00:09:09.189 "rw_mbytes_per_sec": 0, 00:09:09.189 "r_mbytes_per_sec": 0, 00:09:09.189 "w_mbytes_per_sec": 0 00:09:09.189 }, 00:09:09.189 "claimed": false, 00:09:09.189 "zoned": false, 00:09:09.189 "supported_io_types": { 00:09:09.189 "read": true, 00:09:09.189 "write": true, 00:09:09.189 "unmap": true, 00:09:09.189 "flush": true, 00:09:09.189 "reset": true, 00:09:09.189 "nvme_admin": false, 00:09:09.189 "nvme_io": false, 00:09:09.189 "nvme_io_md": false, 00:09:09.189 
"write_zeroes": true, 00:09:09.189 "zcopy": true, 00:09:09.189 "get_zone_info": false, 00:09:09.189 "zone_management": false, 00:09:09.189 "zone_append": false, 00:09:09.189 "compare": false, 00:09:09.189 "compare_and_write": false, 00:09:09.189 "abort": true, 00:09:09.189 "seek_hole": false, 00:09:09.189 "seek_data": false, 00:09:09.189 "copy": true, 00:09:09.189 "nvme_iov_md": false 00:09:09.189 }, 00:09:09.189 "memory_domains": [ 00:09:09.189 { 00:09:09.189 "dma_device_id": "system", 00:09:09.189 "dma_device_type": 1 00:09:09.189 }, 00:09:09.189 { 00:09:09.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.189 "dma_device_type": 2 00:09:09.189 } 00:09:09.189 ], 00:09:09.189 "driver_specific": {} 00:09:09.189 } 00:09:09.189 ] 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.189 BaseBdev3 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.189 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.189 [ 00:09:09.189 { 00:09:09.189 "name": "BaseBdev3", 00:09:09.189 "aliases": [ 00:09:09.189 "75a4ec5d-06ba-40a3-b197-4b7c4c691de1" 00:09:09.189 ], 00:09:09.189 "product_name": "Malloc disk", 00:09:09.189 "block_size": 512, 00:09:09.189 "num_blocks": 65536, 00:09:09.189 "uuid": "75a4ec5d-06ba-40a3-b197-4b7c4c691de1", 00:09:09.189 "assigned_rate_limits": { 00:09:09.189 "rw_ios_per_sec": 0, 00:09:09.189 "rw_mbytes_per_sec": 0, 00:09:09.189 "r_mbytes_per_sec": 0, 00:09:09.189 "w_mbytes_per_sec": 0 00:09:09.189 }, 00:09:09.189 "claimed": false, 00:09:09.189 "zoned": false, 00:09:09.189 "supported_io_types": { 00:09:09.189 "read": true, 00:09:09.189 "write": true, 00:09:09.189 "unmap": true, 00:09:09.189 "flush": true, 00:09:09.189 "reset": true, 00:09:09.189 "nvme_admin": false, 00:09:09.189 "nvme_io": false, 
00:09:09.189 "nvme_io_md": false, 00:09:09.189 "write_zeroes": true, 00:09:09.189 "zcopy": true, 00:09:09.189 "get_zone_info": false, 00:09:09.189 "zone_management": false, 00:09:09.189 "zone_append": false, 00:09:09.189 "compare": false, 00:09:09.189 "compare_and_write": false, 00:09:09.189 "abort": true, 00:09:09.189 "seek_hole": false, 00:09:09.189 "seek_data": false, 00:09:09.189 "copy": true, 00:09:09.189 "nvme_iov_md": false 00:09:09.189 }, 00:09:09.189 "memory_domains": [ 00:09:09.189 { 00:09:09.189 "dma_device_id": "system", 00:09:09.189 "dma_device_type": 1 00:09:09.189 }, 00:09:09.189 { 00:09:09.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.190 "dma_device_type": 2 00:09:09.190 } 00:09:09.190 ], 00:09:09.190 "driver_specific": {} 00:09:09.190 } 00:09:09.190 ] 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.190 [2024-11-28 02:24:42.800567] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.190 [2024-11-28 02:24:42.800620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.190 [2024-11-28 02:24:42.800643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:09:09.190 [2024-11-28 02:24:42.802729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.190 "name": "Existed_Raid", 00:09:09.190 "uuid": "312ac554-8a7e-4d9f-a273-bb0989aadb2b", 00:09:09.190 "strip_size_kb": 0, 00:09:09.190 "state": "configuring", 00:09:09.190 "raid_level": "raid1", 00:09:09.190 "superblock": true, 00:09:09.190 "num_base_bdevs": 3, 00:09:09.190 "num_base_bdevs_discovered": 2, 00:09:09.190 "num_base_bdevs_operational": 3, 00:09:09.190 "base_bdevs_list": [ 00:09:09.190 { 00:09:09.190 "name": "BaseBdev1", 00:09:09.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.190 "is_configured": false, 00:09:09.190 "data_offset": 0, 00:09:09.190 "data_size": 0 00:09:09.190 }, 00:09:09.190 { 00:09:09.190 "name": "BaseBdev2", 00:09:09.190 "uuid": "487cd976-78b1-4bbb-bd9e-44333fe2e7ff", 00:09:09.190 "is_configured": true, 00:09:09.190 "data_offset": 2048, 00:09:09.190 "data_size": 63488 00:09:09.190 }, 00:09:09.190 { 00:09:09.190 "name": "BaseBdev3", 00:09:09.190 "uuid": "75a4ec5d-06ba-40a3-b197-4b7c4c691de1", 00:09:09.190 "is_configured": true, 00:09:09.190 "data_offset": 2048, 00:09:09.190 "data_size": 63488 00:09:09.190 } 00:09:09.190 ] 00:09:09.190 }' 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.190 02:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.758 [2024-11-28 02:24:43.211909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.758 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.759 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.759 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.759 "name": "Existed_Raid", 00:09:09.759 "uuid": 
"312ac554-8a7e-4d9f-a273-bb0989aadb2b", 00:09:09.759 "strip_size_kb": 0, 00:09:09.759 "state": "configuring", 00:09:09.759 "raid_level": "raid1", 00:09:09.759 "superblock": true, 00:09:09.759 "num_base_bdevs": 3, 00:09:09.759 "num_base_bdevs_discovered": 1, 00:09:09.759 "num_base_bdevs_operational": 3, 00:09:09.759 "base_bdevs_list": [ 00:09:09.759 { 00:09:09.759 "name": "BaseBdev1", 00:09:09.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.759 "is_configured": false, 00:09:09.759 "data_offset": 0, 00:09:09.759 "data_size": 0 00:09:09.759 }, 00:09:09.759 { 00:09:09.759 "name": null, 00:09:09.759 "uuid": "487cd976-78b1-4bbb-bd9e-44333fe2e7ff", 00:09:09.759 "is_configured": false, 00:09:09.759 "data_offset": 0, 00:09:09.759 "data_size": 63488 00:09:09.759 }, 00:09:09.759 { 00:09:09.759 "name": "BaseBdev3", 00:09:09.759 "uuid": "75a4ec5d-06ba-40a3-b197-4b7c4c691de1", 00:09:09.759 "is_configured": true, 00:09:09.759 "data_offset": 2048, 00:09:09.759 "data_size": 63488 00:09:09.759 } 00:09:09.759 ] 00:09:09.759 }' 00:09:09.759 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.759 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.018 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.018 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.018 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.018 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:10.018 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.018 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:10.018 02:24:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:10.018 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.018 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.277 [2024-11-28 02:24:43.726158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.277 BaseBdev1 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.277 [ 00:09:10.277 { 00:09:10.277 "name": "BaseBdev1", 00:09:10.277 "aliases": [ 00:09:10.277 "7091caa8-afc5-4e41-9884-a4bf074f0afd" 00:09:10.277 ], 00:09:10.277 "product_name": "Malloc disk", 00:09:10.277 "block_size": 512, 00:09:10.277 "num_blocks": 65536, 00:09:10.277 "uuid": "7091caa8-afc5-4e41-9884-a4bf074f0afd", 00:09:10.277 "assigned_rate_limits": { 00:09:10.277 "rw_ios_per_sec": 0, 00:09:10.277 "rw_mbytes_per_sec": 0, 00:09:10.277 "r_mbytes_per_sec": 0, 00:09:10.277 "w_mbytes_per_sec": 0 00:09:10.277 }, 00:09:10.277 "claimed": true, 00:09:10.277 "claim_type": "exclusive_write", 00:09:10.277 "zoned": false, 00:09:10.277 "supported_io_types": { 00:09:10.277 "read": true, 00:09:10.277 "write": true, 00:09:10.277 "unmap": true, 00:09:10.277 "flush": true, 00:09:10.277 "reset": true, 00:09:10.277 "nvme_admin": false, 00:09:10.277 "nvme_io": false, 00:09:10.277 "nvme_io_md": false, 00:09:10.277 "write_zeroes": true, 00:09:10.277 "zcopy": true, 00:09:10.277 "get_zone_info": false, 00:09:10.277 "zone_management": false, 00:09:10.277 "zone_append": false, 00:09:10.277 "compare": false, 00:09:10.277 "compare_and_write": false, 00:09:10.277 "abort": true, 00:09:10.277 "seek_hole": false, 00:09:10.277 "seek_data": false, 00:09:10.277 "copy": true, 00:09:10.277 "nvme_iov_md": false 00:09:10.277 }, 00:09:10.277 "memory_domains": [ 00:09:10.277 { 00:09:10.277 "dma_device_id": "system", 00:09:10.277 "dma_device_type": 1 00:09:10.277 }, 00:09:10.277 { 00:09:10.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.277 "dma_device_type": 2 00:09:10.277 } 00:09:10.277 ], 00:09:10.277 "driver_specific": {} 00:09:10.277 } 00:09:10.277 ] 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.277 
02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.277 "name": "Existed_Raid", 00:09:10.277 "uuid": "312ac554-8a7e-4d9f-a273-bb0989aadb2b", 00:09:10.277 "strip_size_kb": 0, 
00:09:10.277 "state": "configuring", 00:09:10.277 "raid_level": "raid1", 00:09:10.277 "superblock": true, 00:09:10.277 "num_base_bdevs": 3, 00:09:10.277 "num_base_bdevs_discovered": 2, 00:09:10.277 "num_base_bdevs_operational": 3, 00:09:10.277 "base_bdevs_list": [ 00:09:10.277 { 00:09:10.277 "name": "BaseBdev1", 00:09:10.277 "uuid": "7091caa8-afc5-4e41-9884-a4bf074f0afd", 00:09:10.277 "is_configured": true, 00:09:10.277 "data_offset": 2048, 00:09:10.277 "data_size": 63488 00:09:10.277 }, 00:09:10.277 { 00:09:10.277 "name": null, 00:09:10.277 "uuid": "487cd976-78b1-4bbb-bd9e-44333fe2e7ff", 00:09:10.277 "is_configured": false, 00:09:10.277 "data_offset": 0, 00:09:10.277 "data_size": 63488 00:09:10.277 }, 00:09:10.277 { 00:09:10.277 "name": "BaseBdev3", 00:09:10.277 "uuid": "75a4ec5d-06ba-40a3-b197-4b7c4c691de1", 00:09:10.277 "is_configured": true, 00:09:10.277 "data_offset": 2048, 00:09:10.277 "data_size": 63488 00:09:10.277 } 00:09:10.277 ] 00:09:10.277 }' 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.277 02:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 [2024-11-28 02:24:44.297247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.847 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.847 "name": "Existed_Raid", 00:09:10.847 "uuid": "312ac554-8a7e-4d9f-a273-bb0989aadb2b", 00:09:10.848 "strip_size_kb": 0, 00:09:10.848 "state": "configuring", 00:09:10.848 "raid_level": "raid1", 00:09:10.848 "superblock": true, 00:09:10.848 "num_base_bdevs": 3, 00:09:10.848 "num_base_bdevs_discovered": 1, 00:09:10.848 "num_base_bdevs_operational": 3, 00:09:10.848 "base_bdevs_list": [ 00:09:10.848 { 00:09:10.848 "name": "BaseBdev1", 00:09:10.848 "uuid": "7091caa8-afc5-4e41-9884-a4bf074f0afd", 00:09:10.848 "is_configured": true, 00:09:10.848 "data_offset": 2048, 00:09:10.848 "data_size": 63488 00:09:10.848 }, 00:09:10.848 { 00:09:10.848 "name": null, 00:09:10.848 "uuid": "487cd976-78b1-4bbb-bd9e-44333fe2e7ff", 00:09:10.848 "is_configured": false, 00:09:10.848 "data_offset": 0, 00:09:10.848 "data_size": 63488 00:09:10.848 }, 00:09:10.848 { 00:09:10.848 "name": null, 00:09:10.848 "uuid": "75a4ec5d-06ba-40a3-b197-4b7c4c691de1", 00:09:10.848 "is_configured": false, 00:09:10.848 "data_offset": 0, 00:09:10.848 "data_size": 63488 00:09:10.848 } 00:09:10.848 ] 00:09:10.848 }' 00:09:10.848 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.848 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.107 [2024-11-28 02:24:44.776424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:11.107 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.366 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.366 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.367 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.367 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.367 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.367 02:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.367 "name": "Existed_Raid", 00:09:11.367 "uuid": "312ac554-8a7e-4d9f-a273-bb0989aadb2b", 00:09:11.367 "strip_size_kb": 0, 00:09:11.367 "state": "configuring", 00:09:11.367 "raid_level": "raid1", 00:09:11.367 "superblock": true, 00:09:11.367 "num_base_bdevs": 3, 00:09:11.367 "num_base_bdevs_discovered": 2, 00:09:11.367 "num_base_bdevs_operational": 3, 00:09:11.367 "base_bdevs_list": [ 00:09:11.367 { 00:09:11.367 "name": "BaseBdev1", 00:09:11.367 "uuid": "7091caa8-afc5-4e41-9884-a4bf074f0afd", 00:09:11.367 "is_configured": true, 00:09:11.367 "data_offset": 2048, 00:09:11.367 "data_size": 63488 00:09:11.367 }, 00:09:11.367 { 00:09:11.367 "name": null, 00:09:11.367 "uuid": "487cd976-78b1-4bbb-bd9e-44333fe2e7ff", 00:09:11.367 "is_configured": false, 00:09:11.367 "data_offset": 0, 00:09:11.367 "data_size": 63488 00:09:11.367 }, 00:09:11.367 { 00:09:11.367 "name": "BaseBdev3", 00:09:11.367 "uuid": "75a4ec5d-06ba-40a3-b197-4b7c4c691de1", 00:09:11.367 "is_configured": true, 00:09:11.367 "data_offset": 2048, 00:09:11.367 "data_size": 63488 00:09:11.367 } 00:09:11.367 ] 00:09:11.367 }' 00:09:11.367 02:24:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.367 02:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.627 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.627 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.627 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.627 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.627 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.627 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.627 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.627 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.627 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.627 [2024-11-28 02:24:45.275641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.891 "name": "Existed_Raid", 00:09:11.891 "uuid": "312ac554-8a7e-4d9f-a273-bb0989aadb2b", 00:09:11.891 "strip_size_kb": 0, 00:09:11.891 "state": "configuring", 00:09:11.891 "raid_level": "raid1", 00:09:11.891 "superblock": true, 00:09:11.891 "num_base_bdevs": 3, 00:09:11.891 "num_base_bdevs_discovered": 1, 00:09:11.891 "num_base_bdevs_operational": 3, 00:09:11.891 "base_bdevs_list": [ 00:09:11.891 { 00:09:11.891 "name": null, 00:09:11.891 "uuid": "7091caa8-afc5-4e41-9884-a4bf074f0afd", 00:09:11.891 "is_configured": false, 00:09:11.891 "data_offset": 0, 00:09:11.891 "data_size": 63488 00:09:11.891 }, 00:09:11.891 { 00:09:11.891 "name": null, 00:09:11.891 "uuid": 
"487cd976-78b1-4bbb-bd9e-44333fe2e7ff", 00:09:11.891 "is_configured": false, 00:09:11.891 "data_offset": 0, 00:09:11.891 "data_size": 63488 00:09:11.891 }, 00:09:11.891 { 00:09:11.891 "name": "BaseBdev3", 00:09:11.891 "uuid": "75a4ec5d-06ba-40a3-b197-4b7c4c691de1", 00:09:11.891 "is_configured": true, 00:09:11.891 "data_offset": 2048, 00:09:11.891 "data_size": 63488 00:09:11.891 } 00:09:11.891 ] 00:09:11.891 }' 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.891 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.149 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.149 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.149 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:12.149 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.149 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.149 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:12.149 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:12.149 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.149 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.408 [2024-11-28 02:24:45.833869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.408 "name": "Existed_Raid", 00:09:12.408 "uuid": "312ac554-8a7e-4d9f-a273-bb0989aadb2b", 00:09:12.408 "strip_size_kb": 0, 00:09:12.408 "state": "configuring", 00:09:12.408 
"raid_level": "raid1", 00:09:12.408 "superblock": true, 00:09:12.408 "num_base_bdevs": 3, 00:09:12.408 "num_base_bdevs_discovered": 2, 00:09:12.408 "num_base_bdevs_operational": 3, 00:09:12.408 "base_bdevs_list": [ 00:09:12.408 { 00:09:12.408 "name": null, 00:09:12.408 "uuid": "7091caa8-afc5-4e41-9884-a4bf074f0afd", 00:09:12.408 "is_configured": false, 00:09:12.408 "data_offset": 0, 00:09:12.408 "data_size": 63488 00:09:12.408 }, 00:09:12.408 { 00:09:12.408 "name": "BaseBdev2", 00:09:12.408 "uuid": "487cd976-78b1-4bbb-bd9e-44333fe2e7ff", 00:09:12.408 "is_configured": true, 00:09:12.408 "data_offset": 2048, 00:09:12.408 "data_size": 63488 00:09:12.408 }, 00:09:12.408 { 00:09:12.408 "name": "BaseBdev3", 00:09:12.408 "uuid": "75a4ec5d-06ba-40a3-b197-4b7c4c691de1", 00:09:12.408 "is_configured": true, 00:09:12.408 "data_offset": 2048, 00:09:12.408 "data_size": 63488 00:09:12.408 } 00:09:12.408 ] 00:09:12.408 }' 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.408 02:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.666 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.666 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.666 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.666 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.666 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.925 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:12.925 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:12.925 02:24:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.925 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.925 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.925 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.925 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7091caa8-afc5-4e41-9884-a4bf074f0afd 00:09:12.925 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.925 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.925 [2024-11-28 02:24:46.413676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:12.925 [2024-11-28 02:24:46.413905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:12.925 [2024-11-28 02:24:46.413935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:12.925 [2024-11-28 02:24:46.414196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:12.925 [2024-11-28 02:24:46.414338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:12.925 [2024-11-28 02:24:46.414355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:12.925 [2024-11-28 02:24:46.414478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.925 NewBaseBdev 00:09:12.925 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.925 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:12.926 
02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.926 [ 00:09:12.926 { 00:09:12.926 "name": "NewBaseBdev", 00:09:12.926 "aliases": [ 00:09:12.926 "7091caa8-afc5-4e41-9884-a4bf074f0afd" 00:09:12.926 ], 00:09:12.926 "product_name": "Malloc disk", 00:09:12.926 "block_size": 512, 00:09:12.926 "num_blocks": 65536, 00:09:12.926 "uuid": "7091caa8-afc5-4e41-9884-a4bf074f0afd", 00:09:12.926 "assigned_rate_limits": { 00:09:12.926 "rw_ios_per_sec": 0, 00:09:12.926 "rw_mbytes_per_sec": 0, 00:09:12.926 "r_mbytes_per_sec": 0, 00:09:12.926 "w_mbytes_per_sec": 0 00:09:12.926 }, 00:09:12.926 "claimed": true, 00:09:12.926 "claim_type": "exclusive_write", 00:09:12.926 
"zoned": false, 00:09:12.926 "supported_io_types": { 00:09:12.926 "read": true, 00:09:12.926 "write": true, 00:09:12.926 "unmap": true, 00:09:12.926 "flush": true, 00:09:12.926 "reset": true, 00:09:12.926 "nvme_admin": false, 00:09:12.926 "nvme_io": false, 00:09:12.926 "nvme_io_md": false, 00:09:12.926 "write_zeroes": true, 00:09:12.926 "zcopy": true, 00:09:12.926 "get_zone_info": false, 00:09:12.926 "zone_management": false, 00:09:12.926 "zone_append": false, 00:09:12.926 "compare": false, 00:09:12.926 "compare_and_write": false, 00:09:12.926 "abort": true, 00:09:12.926 "seek_hole": false, 00:09:12.926 "seek_data": false, 00:09:12.926 "copy": true, 00:09:12.926 "nvme_iov_md": false 00:09:12.926 }, 00:09:12.926 "memory_domains": [ 00:09:12.926 { 00:09:12.926 "dma_device_id": "system", 00:09:12.926 "dma_device_type": 1 00:09:12.926 }, 00:09:12.926 { 00:09:12.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.926 "dma_device_type": 2 00:09:12.926 } 00:09:12.926 ], 00:09:12.926 "driver_specific": {} 00:09:12.926 } 00:09:12.926 ] 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.926 "name": "Existed_Raid", 00:09:12.926 "uuid": "312ac554-8a7e-4d9f-a273-bb0989aadb2b", 00:09:12.926 "strip_size_kb": 0, 00:09:12.926 "state": "online", 00:09:12.926 "raid_level": "raid1", 00:09:12.926 "superblock": true, 00:09:12.926 "num_base_bdevs": 3, 00:09:12.926 "num_base_bdevs_discovered": 3, 00:09:12.926 "num_base_bdevs_operational": 3, 00:09:12.926 "base_bdevs_list": [ 00:09:12.926 { 00:09:12.926 "name": "NewBaseBdev", 00:09:12.926 "uuid": "7091caa8-afc5-4e41-9884-a4bf074f0afd", 00:09:12.926 "is_configured": true, 00:09:12.926 "data_offset": 2048, 00:09:12.926 "data_size": 63488 00:09:12.926 }, 00:09:12.926 { 00:09:12.926 "name": "BaseBdev2", 00:09:12.926 "uuid": "487cd976-78b1-4bbb-bd9e-44333fe2e7ff", 00:09:12.926 "is_configured": true, 00:09:12.926 "data_offset": 2048, 00:09:12.926 "data_size": 63488 00:09:12.926 }, 00:09:12.926 
{ 00:09:12.926 "name": "BaseBdev3", 00:09:12.926 "uuid": "75a4ec5d-06ba-40a3-b197-4b7c4c691de1", 00:09:12.926 "is_configured": true, 00:09:12.926 "data_offset": 2048, 00:09:12.926 "data_size": 63488 00:09:12.926 } 00:09:12.926 ] 00:09:12.926 }' 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.926 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.494 [2024-11-28 02:24:46.909201] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.494 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.494 "name": "Existed_Raid", 00:09:13.494 
"aliases": [ 00:09:13.494 "312ac554-8a7e-4d9f-a273-bb0989aadb2b" 00:09:13.494 ], 00:09:13.494 "product_name": "Raid Volume", 00:09:13.494 "block_size": 512, 00:09:13.494 "num_blocks": 63488, 00:09:13.494 "uuid": "312ac554-8a7e-4d9f-a273-bb0989aadb2b", 00:09:13.494 "assigned_rate_limits": { 00:09:13.494 "rw_ios_per_sec": 0, 00:09:13.494 "rw_mbytes_per_sec": 0, 00:09:13.494 "r_mbytes_per_sec": 0, 00:09:13.494 "w_mbytes_per_sec": 0 00:09:13.494 }, 00:09:13.494 "claimed": false, 00:09:13.494 "zoned": false, 00:09:13.494 "supported_io_types": { 00:09:13.494 "read": true, 00:09:13.494 "write": true, 00:09:13.494 "unmap": false, 00:09:13.494 "flush": false, 00:09:13.494 "reset": true, 00:09:13.494 "nvme_admin": false, 00:09:13.494 "nvme_io": false, 00:09:13.494 "nvme_io_md": false, 00:09:13.494 "write_zeroes": true, 00:09:13.494 "zcopy": false, 00:09:13.494 "get_zone_info": false, 00:09:13.494 "zone_management": false, 00:09:13.494 "zone_append": false, 00:09:13.494 "compare": false, 00:09:13.494 "compare_and_write": false, 00:09:13.494 "abort": false, 00:09:13.494 "seek_hole": false, 00:09:13.494 "seek_data": false, 00:09:13.494 "copy": false, 00:09:13.494 "nvme_iov_md": false 00:09:13.494 }, 00:09:13.494 "memory_domains": [ 00:09:13.494 { 00:09:13.494 "dma_device_id": "system", 00:09:13.494 "dma_device_type": 1 00:09:13.494 }, 00:09:13.494 { 00:09:13.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.494 "dma_device_type": 2 00:09:13.494 }, 00:09:13.494 { 00:09:13.494 "dma_device_id": "system", 00:09:13.494 "dma_device_type": 1 00:09:13.494 }, 00:09:13.494 { 00:09:13.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.494 "dma_device_type": 2 00:09:13.494 }, 00:09:13.494 { 00:09:13.494 "dma_device_id": "system", 00:09:13.494 "dma_device_type": 1 00:09:13.494 }, 00:09:13.494 { 00:09:13.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.494 "dma_device_type": 2 00:09:13.494 } 00:09:13.494 ], 00:09:13.494 "driver_specific": { 00:09:13.494 "raid": { 00:09:13.494 
"uuid": "312ac554-8a7e-4d9f-a273-bb0989aadb2b", 00:09:13.494 "strip_size_kb": 0, 00:09:13.494 "state": "online", 00:09:13.494 "raid_level": "raid1", 00:09:13.494 "superblock": true, 00:09:13.494 "num_base_bdevs": 3, 00:09:13.494 "num_base_bdevs_discovered": 3, 00:09:13.494 "num_base_bdevs_operational": 3, 00:09:13.494 "base_bdevs_list": [ 00:09:13.494 { 00:09:13.494 "name": "NewBaseBdev", 00:09:13.494 "uuid": "7091caa8-afc5-4e41-9884-a4bf074f0afd", 00:09:13.494 "is_configured": true, 00:09:13.494 "data_offset": 2048, 00:09:13.494 "data_size": 63488 00:09:13.494 }, 00:09:13.494 { 00:09:13.494 "name": "BaseBdev2", 00:09:13.494 "uuid": "487cd976-78b1-4bbb-bd9e-44333fe2e7ff", 00:09:13.495 "is_configured": true, 00:09:13.495 "data_offset": 2048, 00:09:13.495 "data_size": 63488 00:09:13.495 }, 00:09:13.495 { 00:09:13.495 "name": "BaseBdev3", 00:09:13.495 "uuid": "75a4ec5d-06ba-40a3-b197-4b7c4c691de1", 00:09:13.495 "is_configured": true, 00:09:13.495 "data_offset": 2048, 00:09:13.495 "data_size": 63488 00:09:13.495 } 00:09:13.495 ] 00:09:13.495 } 00:09:13.495 } 00:09:13.495 }' 00:09:13.495 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.495 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:13.495 BaseBdev2 00:09:13.495 BaseBdev3' 00:09:13.495 02:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:13.495 02:24:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.495 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.754 [2024-11-28 02:24:47.184427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.754 [2024-11-28 02:24:47.184465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.754 [2024-11-28 02:24:47.184547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.754 [2024-11-28 02:24:47.184869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.754 [2024-11-28 02:24:47.184885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67822 00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67822 ']' 
00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67822 00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:13.754 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.755 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67822 00:09:13.755 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.755 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.755 killing process with pid 67822 00:09:13.755 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67822' 00:09:13.755 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67822 00:09:13.755 [2024-11-28 02:24:47.225531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.755 02:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67822 00:09:14.012 [2024-11-28 02:24:47.512504] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.949 02:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:14.949 00:09:14.949 real 0m10.521s 00:09:14.949 user 0m16.663s 00:09:14.949 sys 0m1.939s 00:09:14.949 02:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.949 02:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.949 ************************************ 00:09:14.949 END TEST raid_state_function_test_sb 00:09:14.949 ************************************ 00:09:15.208 02:24:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:09:15.208 02:24:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:15.208 02:24:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.208 02:24:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.208 ************************************ 00:09:15.208 START TEST raid_superblock_test 00:09:15.208 ************************************ 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68442 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68442 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68442 ']' 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.208 02:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.208 [2024-11-28 02:24:48.766959] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:15.208 [2024-11-28 02:24:48.767074] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68442 ] 00:09:15.467 [2024-11-28 02:24:48.942002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.467 [2024-11-28 02:24:49.051112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.726 [2024-11-28 02:24:49.251129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.726 [2024-11-28 02:24:49.251165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:15.984 
02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 malloc1 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 [2024-11-28 02:24:49.627111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:15.984 [2024-11-28 02:24:49.627164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.984 [2024-11-28 02:24:49.627185] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:15.984 [2024-11-28 02:24:49.627195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.984 [2024-11-28 02:24:49.629298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.984 [2024-11-28 02:24:49.629329] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:15.984 pt1 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.984 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.243 malloc2 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.243 [2024-11-28 02:24:49.682849] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:16.243 [2024-11-28 02:24:49.682898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.243 [2024-11-28 02:24:49.682932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:16.243 [2024-11-28 02:24:49.682941] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.243 [2024-11-28 02:24:49.685040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.243 [2024-11-28 02:24:49.685071] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:16.243 
pt2 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.243 malloc3 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.243 [2024-11-28 02:24:49.751751] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:16.243 [2024-11-28 02:24:49.751795] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.243 [2024-11-28 02:24:49.751815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:16.243 [2024-11-28 02:24:49.751823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.243 [2024-11-28 02:24:49.753839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.243 [2024-11-28 02:24:49.753869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:16.243 pt3 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.243 [2024-11-28 02:24:49.763775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:16.243 [2024-11-28 02:24:49.765525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:16.243 [2024-11-28 02:24:49.765591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:16.243 [2024-11-28 02:24:49.765749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:16.243 [2024-11-28 02:24:49.765768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:16.243 [2024-11-28 02:24:49.765990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:16.243 
[2024-11-28 02:24:49.766161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:16.243 [2024-11-28 02:24:49.766184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:16.243 [2024-11-28 02:24:49.766317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.243 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.244 "name": "raid_bdev1", 00:09:16.244 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:16.244 "strip_size_kb": 0, 00:09:16.244 "state": "online", 00:09:16.244 "raid_level": "raid1", 00:09:16.244 "superblock": true, 00:09:16.244 "num_base_bdevs": 3, 00:09:16.244 "num_base_bdevs_discovered": 3, 00:09:16.244 "num_base_bdevs_operational": 3, 00:09:16.244 "base_bdevs_list": [ 00:09:16.244 { 00:09:16.244 "name": "pt1", 00:09:16.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.244 "is_configured": true, 00:09:16.244 "data_offset": 2048, 00:09:16.244 "data_size": 63488 00:09:16.244 }, 00:09:16.244 { 00:09:16.244 "name": "pt2", 00:09:16.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.244 "is_configured": true, 00:09:16.244 "data_offset": 2048, 00:09:16.244 "data_size": 63488 00:09:16.244 }, 00:09:16.244 { 00:09:16.244 "name": "pt3", 00:09:16.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.244 "is_configured": true, 00:09:16.244 "data_offset": 2048, 00:09:16.244 "data_size": 63488 00:09:16.244 } 00:09:16.244 ] 00:09:16.244 }' 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.244 02:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.502 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:16.502 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:16.502 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.502 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.502 02:24:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.502 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.502 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.502 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.502 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.502 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.761 [2024-11-28 02:24:50.183396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.761 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.761 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.761 "name": "raid_bdev1", 00:09:16.761 "aliases": [ 00:09:16.761 "df89eb33-22fa-45be-8ec2-d8fb048bf100" 00:09:16.761 ], 00:09:16.761 "product_name": "Raid Volume", 00:09:16.761 "block_size": 512, 00:09:16.761 "num_blocks": 63488, 00:09:16.761 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:16.761 "assigned_rate_limits": { 00:09:16.761 "rw_ios_per_sec": 0, 00:09:16.761 "rw_mbytes_per_sec": 0, 00:09:16.761 "r_mbytes_per_sec": 0, 00:09:16.761 "w_mbytes_per_sec": 0 00:09:16.761 }, 00:09:16.761 "claimed": false, 00:09:16.761 "zoned": false, 00:09:16.761 "supported_io_types": { 00:09:16.761 "read": true, 00:09:16.761 "write": true, 00:09:16.761 "unmap": false, 00:09:16.761 "flush": false, 00:09:16.761 "reset": true, 00:09:16.761 "nvme_admin": false, 00:09:16.761 "nvme_io": false, 00:09:16.761 "nvme_io_md": false, 00:09:16.761 "write_zeroes": true, 00:09:16.761 "zcopy": false, 00:09:16.761 "get_zone_info": false, 00:09:16.761 "zone_management": false, 00:09:16.761 "zone_append": false, 00:09:16.761 "compare": false, 00:09:16.761 
"compare_and_write": false, 00:09:16.761 "abort": false, 00:09:16.761 "seek_hole": false, 00:09:16.761 "seek_data": false, 00:09:16.761 "copy": false, 00:09:16.761 "nvme_iov_md": false 00:09:16.761 }, 00:09:16.761 "memory_domains": [ 00:09:16.761 { 00:09:16.761 "dma_device_id": "system", 00:09:16.761 "dma_device_type": 1 00:09:16.761 }, 00:09:16.761 { 00:09:16.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.761 "dma_device_type": 2 00:09:16.761 }, 00:09:16.761 { 00:09:16.761 "dma_device_id": "system", 00:09:16.761 "dma_device_type": 1 00:09:16.761 }, 00:09:16.761 { 00:09:16.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.761 "dma_device_type": 2 00:09:16.761 }, 00:09:16.761 { 00:09:16.761 "dma_device_id": "system", 00:09:16.761 "dma_device_type": 1 00:09:16.761 }, 00:09:16.761 { 00:09:16.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.762 "dma_device_type": 2 00:09:16.762 } 00:09:16.762 ], 00:09:16.762 "driver_specific": { 00:09:16.762 "raid": { 00:09:16.762 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:16.762 "strip_size_kb": 0, 00:09:16.762 "state": "online", 00:09:16.762 "raid_level": "raid1", 00:09:16.762 "superblock": true, 00:09:16.762 "num_base_bdevs": 3, 00:09:16.762 "num_base_bdevs_discovered": 3, 00:09:16.762 "num_base_bdevs_operational": 3, 00:09:16.762 "base_bdevs_list": [ 00:09:16.762 { 00:09:16.762 "name": "pt1", 00:09:16.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.762 "is_configured": true, 00:09:16.762 "data_offset": 2048, 00:09:16.762 "data_size": 63488 00:09:16.762 }, 00:09:16.762 { 00:09:16.762 "name": "pt2", 00:09:16.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.762 "is_configured": true, 00:09:16.762 "data_offset": 2048, 00:09:16.762 "data_size": 63488 00:09:16.762 }, 00:09:16.762 { 00:09:16.762 "name": "pt3", 00:09:16.762 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.762 "is_configured": true, 00:09:16.762 "data_offset": 2048, 00:09:16.762 "data_size": 63488 00:09:16.762 } 
00:09:16.762 ] 00:09:16.762 } 00:09:16.762 } 00:09:16.762 }' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:16.762 pt2 00:09:16.762 pt3' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.762 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.762 [2024-11-28 02:24:50.430872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=df89eb33-22fa-45be-8ec2-d8fb048bf100 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z df89eb33-22fa-45be-8ec2-d8fb048bf100 ']' 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.022 [2024-11-28 02:24:50.462542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.022 [2024-11-28 02:24:50.462573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.022 [2024-11-28 02:24:50.462645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.022 [2024-11-28 02:24:50.462719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.022 [2024-11-28 02:24:50.462730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.022 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:17.023 02:24:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.023 [2024-11-28 02:24:50.602340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:17.023 [2024-11-28 02:24:50.604182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:17.023 [2024-11-28 02:24:50.604247] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:17.023 [2024-11-28 02:24:50.604298] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:17.023 [2024-11-28 02:24:50.604342] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:17.023 [2024-11-28 02:24:50.604360] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:17.023 [2024-11-28 02:24:50.604375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.023 [2024-11-28 02:24:50.604386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:17.023 request: 00:09:17.023 { 00:09:17.023 "name": "raid_bdev1", 00:09:17.023 "raid_level": "raid1", 00:09:17.023 "base_bdevs": [ 00:09:17.023 "malloc1", 00:09:17.023 "malloc2", 00:09:17.023 "malloc3" 00:09:17.023 ], 00:09:17.023 "superblock": false, 00:09:17.023 "method": "bdev_raid_create", 00:09:17.023 "req_id": 1 00:09:17.023 } 00:09:17.023 Got JSON-RPC error response 00:09:17.023 response: 00:09:17.023 { 00:09:17.023 "code": -17, 00:09:17.023 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:17.023 } 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.023 [2024-11-28 02:24:50.666211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:17.023 [2024-11-28 02:24:50.666262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.023 [2024-11-28 02:24:50.666283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:17.023 [2024-11-28 02:24:50.666292] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.023 [2024-11-28 02:24:50.668487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.023 [2024-11-28 02:24:50.668518] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:17.023 [2024-11-28 02:24:50.668597] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:17.023 [2024-11-28 02:24:50.668644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:17.023 pt1 00:09:17.023 
02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.023 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.282 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.282 "name": "raid_bdev1", 00:09:17.282 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:17.282 "strip_size_kb": 0, 00:09:17.282 
"state": "configuring", 00:09:17.282 "raid_level": "raid1", 00:09:17.282 "superblock": true, 00:09:17.282 "num_base_bdevs": 3, 00:09:17.282 "num_base_bdevs_discovered": 1, 00:09:17.282 "num_base_bdevs_operational": 3, 00:09:17.282 "base_bdevs_list": [ 00:09:17.282 { 00:09:17.282 "name": "pt1", 00:09:17.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.282 "is_configured": true, 00:09:17.282 "data_offset": 2048, 00:09:17.282 "data_size": 63488 00:09:17.282 }, 00:09:17.282 { 00:09:17.282 "name": null, 00:09:17.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.282 "is_configured": false, 00:09:17.282 "data_offset": 2048, 00:09:17.282 "data_size": 63488 00:09:17.282 }, 00:09:17.282 { 00:09:17.282 "name": null, 00:09:17.282 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.282 "is_configured": false, 00:09:17.282 "data_offset": 2048, 00:09:17.282 "data_size": 63488 00:09:17.282 } 00:09:17.282 ] 00:09:17.282 }' 00:09:17.282 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.282 02:24:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.541 [2024-11-28 02:24:51.045566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:17.541 [2024-11-28 02:24:51.045640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.541 [2024-11-28 02:24:51.045664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:17.541 
[2024-11-28 02:24:51.045675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.541 [2024-11-28 02:24:51.046159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.541 [2024-11-28 02:24:51.046178] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:17.541 [2024-11-28 02:24:51.046261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:17.541 [2024-11-28 02:24:51.046286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:17.541 pt2 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.541 [2024-11-28 02:24:51.057549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.541 "name": "raid_bdev1", 00:09:17.541 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:17.541 "strip_size_kb": 0, 00:09:17.541 "state": "configuring", 00:09:17.541 "raid_level": "raid1", 00:09:17.541 "superblock": true, 00:09:17.541 "num_base_bdevs": 3, 00:09:17.541 "num_base_bdevs_discovered": 1, 00:09:17.541 "num_base_bdevs_operational": 3, 00:09:17.541 "base_bdevs_list": [ 00:09:17.541 { 00:09:17.541 "name": "pt1", 00:09:17.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.541 "is_configured": true, 00:09:17.541 "data_offset": 2048, 00:09:17.541 "data_size": 63488 00:09:17.541 }, 00:09:17.541 { 00:09:17.541 "name": null, 00:09:17.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.541 "is_configured": false, 00:09:17.541 "data_offset": 0, 00:09:17.541 "data_size": 63488 00:09:17.541 }, 00:09:17.541 { 00:09:17.541 "name": null, 00:09:17.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.541 "is_configured": false, 00:09:17.541 
"data_offset": 2048, 00:09:17.541 "data_size": 63488 00:09:17.541 } 00:09:17.541 ] 00:09:17.541 }' 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.541 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.109 [2024-11-28 02:24:51.492822] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.109 [2024-11-28 02:24:51.492897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.109 [2024-11-28 02:24:51.492926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:18.109 [2024-11-28 02:24:51.492938] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.109 [2024-11-28 02:24:51.493373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.109 [2024-11-28 02:24:51.493391] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.109 [2024-11-28 02:24:51.493469] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:18.109 [2024-11-28 02:24:51.493501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.109 pt2 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.109 02:24:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.109 [2024-11-28 02:24:51.504784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:18.109 [2024-11-28 02:24:51.504828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.109 [2024-11-28 02:24:51.504843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:18.109 [2024-11-28 02:24:51.504852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.109 [2024-11-28 02:24:51.505231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.109 [2024-11-28 02:24:51.505258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:18.109 [2024-11-28 02:24:51.505321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:18.109 [2024-11-28 02:24:51.505341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:18.109 [2024-11-28 02:24:51.505467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:18.109 [2024-11-28 02:24:51.505485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:18.109 [2024-11-28 02:24:51.505706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:18.109 [2024-11-28 02:24:51.505848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:18.109 [2024-11-28 02:24:51.505856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:18.109 [2024-11-28 02:24:51.506010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.109 pt3 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.109 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.110 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.110 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.110 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.110 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.110 02:24:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.110 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.110 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.110 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.110 "name": "raid_bdev1", 00:09:18.110 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:18.110 "strip_size_kb": 0, 00:09:18.110 "state": "online", 00:09:18.110 "raid_level": "raid1", 00:09:18.110 "superblock": true, 00:09:18.110 "num_base_bdevs": 3, 00:09:18.110 "num_base_bdevs_discovered": 3, 00:09:18.110 "num_base_bdevs_operational": 3, 00:09:18.110 "base_bdevs_list": [ 00:09:18.110 { 00:09:18.110 "name": "pt1", 00:09:18.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.110 "is_configured": true, 00:09:18.110 "data_offset": 2048, 00:09:18.110 "data_size": 63488 00:09:18.110 }, 00:09:18.110 { 00:09:18.110 "name": "pt2", 00:09:18.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.110 "is_configured": true, 00:09:18.110 "data_offset": 2048, 00:09:18.110 "data_size": 63488 00:09:18.110 }, 00:09:18.110 { 00:09:18.110 "name": "pt3", 00:09:18.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.110 "is_configured": true, 00:09:18.110 "data_offset": 2048, 00:09:18.110 "data_size": 63488 00:09:18.110 } 00:09:18.110 ] 00:09:18.110 }' 00:09:18.110 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.110 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.369 [2024-11-28 02:24:51.900453] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.369 "name": "raid_bdev1", 00:09:18.369 "aliases": [ 00:09:18.369 "df89eb33-22fa-45be-8ec2-d8fb048bf100" 00:09:18.369 ], 00:09:18.369 "product_name": "Raid Volume", 00:09:18.369 "block_size": 512, 00:09:18.369 "num_blocks": 63488, 00:09:18.369 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:18.369 "assigned_rate_limits": { 00:09:18.369 "rw_ios_per_sec": 0, 00:09:18.369 "rw_mbytes_per_sec": 0, 00:09:18.369 "r_mbytes_per_sec": 0, 00:09:18.369 "w_mbytes_per_sec": 0 00:09:18.369 }, 00:09:18.369 "claimed": false, 00:09:18.369 "zoned": false, 00:09:18.369 "supported_io_types": { 00:09:18.369 "read": true, 00:09:18.369 "write": true, 00:09:18.369 "unmap": false, 00:09:18.369 "flush": false, 00:09:18.369 "reset": true, 00:09:18.369 "nvme_admin": false, 00:09:18.369 "nvme_io": false, 00:09:18.369 "nvme_io_md": false, 00:09:18.369 "write_zeroes": true, 00:09:18.369 "zcopy": false, 00:09:18.369 "get_zone_info": 
false, 00:09:18.369 "zone_management": false, 00:09:18.369 "zone_append": false, 00:09:18.369 "compare": false, 00:09:18.369 "compare_and_write": false, 00:09:18.369 "abort": false, 00:09:18.369 "seek_hole": false, 00:09:18.369 "seek_data": false, 00:09:18.369 "copy": false, 00:09:18.369 "nvme_iov_md": false 00:09:18.369 }, 00:09:18.369 "memory_domains": [ 00:09:18.369 { 00:09:18.369 "dma_device_id": "system", 00:09:18.369 "dma_device_type": 1 00:09:18.369 }, 00:09:18.369 { 00:09:18.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.369 "dma_device_type": 2 00:09:18.369 }, 00:09:18.369 { 00:09:18.369 "dma_device_id": "system", 00:09:18.369 "dma_device_type": 1 00:09:18.369 }, 00:09:18.369 { 00:09:18.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.369 "dma_device_type": 2 00:09:18.369 }, 00:09:18.369 { 00:09:18.369 "dma_device_id": "system", 00:09:18.369 "dma_device_type": 1 00:09:18.369 }, 00:09:18.369 { 00:09:18.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.369 "dma_device_type": 2 00:09:18.369 } 00:09:18.369 ], 00:09:18.369 "driver_specific": { 00:09:18.369 "raid": { 00:09:18.369 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:18.369 "strip_size_kb": 0, 00:09:18.369 "state": "online", 00:09:18.369 "raid_level": "raid1", 00:09:18.369 "superblock": true, 00:09:18.369 "num_base_bdevs": 3, 00:09:18.369 "num_base_bdevs_discovered": 3, 00:09:18.369 "num_base_bdevs_operational": 3, 00:09:18.369 "base_bdevs_list": [ 00:09:18.369 { 00:09:18.369 "name": "pt1", 00:09:18.369 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.369 "is_configured": true, 00:09:18.369 "data_offset": 2048, 00:09:18.369 "data_size": 63488 00:09:18.369 }, 00:09:18.369 { 00:09:18.369 "name": "pt2", 00:09:18.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.369 "is_configured": true, 00:09:18.369 "data_offset": 2048, 00:09:18.369 "data_size": 63488 00:09:18.369 }, 00:09:18.369 { 00:09:18.369 "name": "pt3", 00:09:18.369 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:18.369 "is_configured": true, 00:09:18.369 "data_offset": 2048, 00:09:18.369 "data_size": 63488 00:09:18.369 } 00:09:18.369 ] 00:09:18.369 } 00:09:18.369 } 00:09:18.369 }' 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.369 pt2 00:09:18.369 pt3' 00:09:18.369 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.369 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.369 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.370 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:18.370 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.370 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.370 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.370 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.629 [2024-11-28 02:24:52.147975] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' df89eb33-22fa-45be-8ec2-d8fb048bf100 '!=' df89eb33-22fa-45be-8ec2-d8fb048bf100 ']' 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.629 [2024-11-28 02:24:52.179669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.629 02:24:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.629 "name": "raid_bdev1", 00:09:18.629 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:18.629 "strip_size_kb": 0, 00:09:18.629 "state": "online", 00:09:18.629 "raid_level": "raid1", 00:09:18.629 "superblock": true, 00:09:18.629 "num_base_bdevs": 3, 00:09:18.629 "num_base_bdevs_discovered": 2, 00:09:18.629 "num_base_bdevs_operational": 2, 00:09:18.629 "base_bdevs_list": [ 00:09:18.629 { 00:09:18.629 "name": null, 00:09:18.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.629 "is_configured": false, 00:09:18.629 "data_offset": 0, 00:09:18.629 "data_size": 63488 00:09:18.629 }, 00:09:18.629 { 00:09:18.629 "name": "pt2", 00:09:18.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.629 "is_configured": true, 00:09:18.629 "data_offset": 2048, 00:09:18.629 "data_size": 63488 00:09:18.629 }, 00:09:18.629 { 00:09:18.629 "name": "pt3", 00:09:18.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.629 "is_configured": true, 00:09:18.629 "data_offset": 2048, 00:09:18.629 "data_size": 63488 00:09:18.629 } 
00:09:18.629 ] 00:09:18.629 }' 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.629 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.198 [2024-11-28 02:24:52.595005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.198 [2024-11-28 02:24:52.595038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.198 [2024-11-28 02:24:52.595121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.198 [2024-11-28 02:24:52.595180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.198 [2024-11-28 02:24:52.595198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:19.198 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.199 02:24:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.199 [2024-11-28 02:24:52.662832] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.199 [2024-11-28 02:24:52.662883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.199 [2024-11-28 02:24:52.662900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:19.199 [2024-11-28 02:24:52.662910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.199 [2024-11-28 02:24:52.665180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.199 [2024-11-28 02:24:52.665215] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.199 [2024-11-28 02:24:52.665289] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.199 [2024-11-28 02:24:52.665342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.199 pt2 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.199 02:24:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.199 "name": "raid_bdev1", 00:09:19.199 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:19.199 "strip_size_kb": 0, 00:09:19.199 "state": "configuring", 00:09:19.199 "raid_level": "raid1", 00:09:19.199 "superblock": true, 00:09:19.199 "num_base_bdevs": 3, 00:09:19.199 "num_base_bdevs_discovered": 1, 00:09:19.199 "num_base_bdevs_operational": 2, 00:09:19.199 "base_bdevs_list": [ 00:09:19.199 { 00:09:19.199 "name": null, 00:09:19.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.199 "is_configured": false, 00:09:19.199 "data_offset": 2048, 00:09:19.199 "data_size": 63488 00:09:19.199 }, 00:09:19.199 { 00:09:19.199 "name": "pt2", 00:09:19.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.199 "is_configured": true, 00:09:19.199 "data_offset": 2048, 00:09:19.199 "data_size": 63488 00:09:19.199 }, 00:09:19.199 { 00:09:19.199 "name": null, 00:09:19.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.199 "is_configured": false, 00:09:19.199 "data_offset": 2048, 00:09:19.199 "data_size": 63488 00:09:19.199 } 
00:09:19.199 ] 00:09:19.199 }' 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.199 02:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.458 [2024-11-28 02:24:53.106137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.458 [2024-11-28 02:24:53.106205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.458 [2024-11-28 02:24:53.106227] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:19.458 [2024-11-28 02:24:53.106238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.458 [2024-11-28 02:24:53.106704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.458 [2024-11-28 02:24:53.106731] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.458 [2024-11-28 02:24:53.106825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:19.458 [2024-11-28 02:24:53.106854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.458 [2024-11-28 02:24:53.106993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:19.458 [2024-11-28 02:24:53.107005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:19.458 [2024-11-28 02:24:53.107275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:19.458 [2024-11-28 02:24:53.107432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:19.458 [2024-11-28 02:24:53.107442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:19.458 [2024-11-28 02:24:53.107620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.458 pt3 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.458 
02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.458 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.717 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.717 "name": "raid_bdev1", 00:09:19.717 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:19.717 "strip_size_kb": 0, 00:09:19.717 "state": "online", 00:09:19.717 "raid_level": "raid1", 00:09:19.717 "superblock": true, 00:09:19.717 "num_base_bdevs": 3, 00:09:19.717 "num_base_bdevs_discovered": 2, 00:09:19.717 "num_base_bdevs_operational": 2, 00:09:19.717 "base_bdevs_list": [ 00:09:19.717 { 00:09:19.717 "name": null, 00:09:19.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.717 "is_configured": false, 00:09:19.717 "data_offset": 2048, 00:09:19.717 "data_size": 63488 00:09:19.717 }, 00:09:19.717 { 00:09:19.717 "name": "pt2", 00:09:19.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.717 "is_configured": true, 00:09:19.717 "data_offset": 2048, 00:09:19.717 "data_size": 63488 00:09:19.717 }, 00:09:19.717 { 00:09:19.717 "name": "pt3", 00:09:19.717 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.717 "is_configured": true, 00:09:19.717 "data_offset": 2048, 00:09:19.717 "data_size": 63488 00:09:19.717 } 00:09:19.717 ] 00:09:19.717 }' 00:09:19.717 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.717 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.976 [2024-11-28 02:24:53.541358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.976 [2024-11-28 02:24:53.541392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.976 [2024-11-28 02:24:53.541467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.976 [2024-11-28 02:24:53.541532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.976 [2024-11-28 02:24:53.541544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.976 [2024-11-28 02:24:53.609239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:19.976 [2024-11-28 02:24:53.609289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.976 [2024-11-28 02:24:53.609307] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:19.976 [2024-11-28 02:24:53.609315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.976 [2024-11-28 02:24:53.611428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.976 [2024-11-28 02:24:53.611457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:19.976 [2024-11-28 02:24:53.611539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:19.976 [2024-11-28 02:24:53.611581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:19.976 [2024-11-28 02:24:53.611725] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:19.976 [2024-11-28 02:24:53.611740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.976 [2024-11-28 02:24:53.611757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:19.976 [2024-11-28 02:24:53.611819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.976 pt1 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.976 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.977 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.977 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.977 02:24:53 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.235 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.235 "name": "raid_bdev1", 00:09:20.235 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:20.235 "strip_size_kb": 0, 00:09:20.235 "state": "configuring", 00:09:20.235 "raid_level": "raid1", 00:09:20.235 "superblock": true, 00:09:20.235 "num_base_bdevs": 3, 00:09:20.235 "num_base_bdevs_discovered": 1, 00:09:20.235 "num_base_bdevs_operational": 2, 00:09:20.235 "base_bdevs_list": [ 00:09:20.235 { 00:09:20.235 "name": null, 00:09:20.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.235 "is_configured": false, 00:09:20.236 "data_offset": 2048, 00:09:20.236 "data_size": 63488 00:09:20.236 }, 00:09:20.236 { 00:09:20.236 "name": "pt2", 00:09:20.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.236 "is_configured": true, 00:09:20.236 "data_offset": 2048, 00:09:20.236 "data_size": 63488 00:09:20.236 }, 00:09:20.236 { 00:09:20.236 "name": null, 00:09:20.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.236 "is_configured": false, 00:09:20.236 "data_offset": 2048, 00:09:20.236 "data_size": 63488 00:09:20.236 } 00:09:20.236 ] 00:09:20.236 }' 00:09:20.236 02:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.236 02:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.494 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:20.494 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:20.494 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.494 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.495 [2024-11-28 02:24:54.116376] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:20.495 [2024-11-28 02:24:54.116438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.495 [2024-11-28 02:24:54.116461] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:20.495 [2024-11-28 02:24:54.116470] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.495 [2024-11-28 02:24:54.116956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.495 [2024-11-28 02:24:54.116973] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:20.495 [2024-11-28 02:24:54.117053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:20.495 [2024-11-28 02:24:54.117075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:20.495 [2024-11-28 02:24:54.117195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:20.495 [2024-11-28 02:24:54.117204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:20.495 [2024-11-28 02:24:54.117433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:20.495 [2024-11-28 02:24:54.117582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:20.495 [2024-11-28 02:24:54.117596] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:20.495 [2024-11-28 02:24:54.117728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.495 pt3 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.495 "name": "raid_bdev1", 00:09:20.495 "uuid": "df89eb33-22fa-45be-8ec2-d8fb048bf100", 00:09:20.495 "strip_size_kb": 0, 00:09:20.495 "state": "online", 00:09:20.495 "raid_level": "raid1", 00:09:20.495 "superblock": true, 00:09:20.495 "num_base_bdevs": 3, 00:09:20.495 "num_base_bdevs_discovered": 2, 00:09:20.495 "num_base_bdevs_operational": 2, 00:09:20.495 "base_bdevs_list": [ 00:09:20.495 { 00:09:20.495 "name": null, 00:09:20.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.495 "is_configured": false, 00:09:20.495 "data_offset": 2048, 00:09:20.495 "data_size": 63488 00:09:20.495 }, 00:09:20.495 { 00:09:20.495 "name": "pt2", 00:09:20.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.495 "is_configured": true, 00:09:20.495 "data_offset": 2048, 00:09:20.495 "data_size": 63488 00:09:20.495 }, 00:09:20.495 { 00:09:20.495 "name": "pt3", 00:09:20.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.495 "is_configured": true, 00:09:20.495 "data_offset": 2048, 00:09:20.495 "data_size": 63488 00:09:20.495 } 00:09:20.495 ] 00:09:20.495 }' 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.495 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.063 [2024-11-28 02:24:54.603890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' df89eb33-22fa-45be-8ec2-d8fb048bf100 '!=' df89eb33-22fa-45be-8ec2-d8fb048bf100 ']' 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68442 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68442 ']' 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68442 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68442 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.063 killing process with pid 68442 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68442' 00:09:21.063 02:24:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68442 00:09:21.063 [2024-11-28 02:24:54.671091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.063 [2024-11-28 02:24:54.671175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.064 [2024-11-28 02:24:54.671235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.064 [2024-11-28 02:24:54.671251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:21.064 02:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68442 00:09:21.322 [2024-11-28 02:24:54.961426] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.714 02:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:22.714 00:09:22.714 real 0m7.375s 00:09:22.714 user 0m11.514s 00:09:22.714 sys 0m1.346s 00:09:22.714 02:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.714 02:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.714 ************************************ 00:09:22.714 END TEST raid_superblock_test 00:09:22.714 ************************************ 00:09:22.714 02:24:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:22.714 02:24:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:22.714 02:24:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.714 02:24:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.714 ************************************ 00:09:22.714 START TEST raid_read_error_test 00:09:22.714 ************************************ 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:22.714 02:24:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:22.714 02:24:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kCcorkTVP1 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68888 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68888 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68888 ']' 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.714 02:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.714 [2024-11-28 02:24:56.223086] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:22.714 [2024-11-28 02:24:56.223198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68888 ] 00:09:22.972 [2024-11-28 02:24:56.396072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.972 [2024-11-28 02:24:56.504582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.244 [2024-11-28 02:24:56.688779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.244 [2024-11-28 02:24:56.688840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.501 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.501 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:23.501 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.501 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:23.501 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.501 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.501 BaseBdev1_malloc 00:09:23.501 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.502 true 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.502 [2024-11-28 02:24:57.110974] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:23.502 [2024-11-28 02:24:57.111045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.502 [2024-11-28 02:24:57.111067] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:23.502 [2024-11-28 02:24:57.111077] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.502 [2024-11-28 02:24:57.113123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.502 [2024-11-28 02:24:57.113156] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:23.502 BaseBdev1 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.502 BaseBdev2_malloc 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.502 true 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.502 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.502 [2024-11-28 02:24:57.178095] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:23.502 [2024-11-28 02:24:57.178160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.502 [2024-11-28 02:24:57.178176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:23.502 [2024-11-28 02:24:57.178186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.760 [2024-11-28 02:24:57.180215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.760 [2024-11-28 02:24:57.180250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:23.760 BaseBdev2 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.760 BaseBdev3_malloc 00:09:23.760 02:24:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.760 true 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.760 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.760 [2024-11-28 02:24:57.256827] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:23.761 [2024-11-28 02:24:57.256894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.761 [2024-11-28 02:24:57.256911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:23.761 [2024-11-28 02:24:57.256921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.761 [2024-11-28 02:24:57.258934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.761 [2024-11-28 02:24:57.258965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:23.761 BaseBdev3 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.761 [2024-11-28 02:24:57.268869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.761 [2024-11-28 02:24:57.270605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.761 [2024-11-28 02:24:57.270694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.761 [2024-11-28 02:24:57.270884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:23.761 [2024-11-28 02:24:57.270913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:23.761 [2024-11-28 02:24:57.271152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:23.761 [2024-11-28 02:24:57.271328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:23.761 [2024-11-28 02:24:57.271343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:23.761 [2024-11-28 02:24:57.271479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.761 02:24:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.761 "name": "raid_bdev1", 00:09:23.761 "uuid": "8f2481a6-994c-4394-93e9-2f4c86fbec9b", 00:09:23.761 "strip_size_kb": 0, 00:09:23.761 "state": "online", 00:09:23.761 "raid_level": "raid1", 00:09:23.761 "superblock": true, 00:09:23.761 "num_base_bdevs": 3, 00:09:23.761 "num_base_bdevs_discovered": 3, 00:09:23.761 "num_base_bdevs_operational": 3, 00:09:23.761 "base_bdevs_list": [ 00:09:23.761 { 00:09:23.761 "name": "BaseBdev1", 00:09:23.761 "uuid": "0b7558b5-578c-5c7b-a44d-ccbe7e7f2f5d", 00:09:23.761 "is_configured": true, 00:09:23.761 "data_offset": 2048, 00:09:23.761 "data_size": 63488 00:09:23.761 }, 00:09:23.761 { 00:09:23.761 "name": "BaseBdev2", 00:09:23.761 "uuid": "e18d96c6-e9a3-5ab6-8294-d89994419f07", 00:09:23.761 "is_configured": true, 00:09:23.761 "data_offset": 2048, 00:09:23.761 "data_size": 63488 
00:09:23.761 }, 00:09:23.761 { 00:09:23.761 "name": "BaseBdev3", 00:09:23.761 "uuid": "d151bb32-a6b1-5796-80e3-62dbb5ab9111", 00:09:23.761 "is_configured": true, 00:09:23.761 "data_offset": 2048, 00:09:23.761 "data_size": 63488 00:09:23.761 } 00:09:23.761 ] 00:09:23.761 }' 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.761 02:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.325 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:24.325 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:24.325 [2024-11-28 02:24:57.801257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.261 
02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.261 "name": "raid_bdev1", 00:09:25.261 "uuid": "8f2481a6-994c-4394-93e9-2f4c86fbec9b", 00:09:25.261 "strip_size_kb": 0, 00:09:25.261 "state": "online", 00:09:25.261 "raid_level": "raid1", 00:09:25.261 "superblock": true, 00:09:25.261 "num_base_bdevs": 3, 00:09:25.261 "num_base_bdevs_discovered": 3, 00:09:25.261 "num_base_bdevs_operational": 3, 00:09:25.261 "base_bdevs_list": [ 00:09:25.261 { 00:09:25.261 "name": "BaseBdev1", 00:09:25.261 "uuid": "0b7558b5-578c-5c7b-a44d-ccbe7e7f2f5d", 
00:09:25.261 "is_configured": true, 00:09:25.261 "data_offset": 2048, 00:09:25.261 "data_size": 63488 00:09:25.261 }, 00:09:25.261 { 00:09:25.261 "name": "BaseBdev2", 00:09:25.261 "uuid": "e18d96c6-e9a3-5ab6-8294-d89994419f07", 00:09:25.261 "is_configured": true, 00:09:25.261 "data_offset": 2048, 00:09:25.261 "data_size": 63488 00:09:25.261 }, 00:09:25.261 { 00:09:25.261 "name": "BaseBdev3", 00:09:25.261 "uuid": "d151bb32-a6b1-5796-80e3-62dbb5ab9111", 00:09:25.261 "is_configured": true, 00:09:25.261 "data_offset": 2048, 00:09:25.261 "data_size": 63488 00:09:25.261 } 00:09:25.261 ] 00:09:25.261 }' 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.261 02:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.521 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.521 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.521 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.521 [2024-11-28 02:24:59.173212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.521 [2024-11-28 02:24:59.173246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.521 [2024-11-28 02:24:59.175883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.521 [2024-11-28 02:24:59.175943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.521 [2024-11-28 02:24:59.176045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.521 [2024-11-28 02:24:59.176060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:25.521 { 00:09:25.521 "results": [ 00:09:25.521 { 00:09:25.521 "job": "raid_bdev1", 
00:09:25.521 "core_mask": "0x1", 00:09:25.521 "workload": "randrw", 00:09:25.521 "percentage": 50, 00:09:25.521 "status": "finished", 00:09:25.521 "queue_depth": 1, 00:09:25.521 "io_size": 131072, 00:09:25.521 "runtime": 1.372977, 00:09:25.521 "iops": 13666.65282812458, 00:09:25.521 "mibps": 1708.3316035155724, 00:09:25.521 "io_failed": 0, 00:09:25.521 "io_timeout": 0, 00:09:25.521 "avg_latency_us": 70.5646560960829, 00:09:25.521 "min_latency_us": 23.252401746724892, 00:09:25.521 "max_latency_us": 1423.7624454148472 00:09:25.521 } 00:09:25.521 ], 00:09:25.521 "core_count": 1 00:09:25.521 } 00:09:25.521 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.521 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68888 00:09:25.521 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68888 ']' 00:09:25.521 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68888 00:09:25.521 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:25.521 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.521 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68888 00:09:25.780 killing process with pid 68888 00:09:25.780 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.780 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.780 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68888' 00:09:25.780 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68888 00:09:25.780 [2024-11-28 02:24:59.222359] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.780 02:24:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68888 00:09:25.780 [2024-11-28 02:24:59.444844] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.156 02:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kCcorkTVP1 00:09:27.156 02:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:27.156 02:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.156 02:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:27.156 02:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:27.156 02:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.156 02:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:27.156 02:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:27.156 00:09:27.156 real 0m4.476s 00:09:27.156 user 0m5.318s 00:09:27.156 sys 0m0.561s 00:09:27.156 02:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.156 02:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.156 ************************************ 00:09:27.156 END TEST raid_read_error_test 00:09:27.156 ************************************ 00:09:27.156 02:25:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:27.156 02:25:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:27.156 02:25:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.156 02:25:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.156 ************************************ 00:09:27.156 START TEST raid_write_error_test 00:09:27.156 ************************************ 00:09:27.156 02:25:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:27.156 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YBc5I9L6CH 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69028 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69028 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69028 ']' 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.157 02:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.157 [2024-11-28 02:25:00.770867] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:27.157 [2024-11-28 02:25:00.771007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69028 ] 00:09:27.415 [2024-11-28 02:25:00.944738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.415 [2024-11-28 02:25:01.054941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.674 [2024-11-28 02:25:01.248245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.674 [2024-11-28 02:25:01.248280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.934 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.934 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:27.934 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.934 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:27.934 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.934 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.195 BaseBdev1_malloc 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.195 true 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.195 [2024-11-28 02:25:01.651288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:28.195 [2024-11-28 02:25:01.651336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.195 [2024-11-28 02:25:01.651355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:28.195 [2024-11-28 02:25:01.651365] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.195 [2024-11-28 02:25:01.653360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.195 [2024-11-28 02:25:01.653395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:28.195 BaseBdev1 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:28.195 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.196 BaseBdev2_malloc 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.196 true 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.196 [2024-11-28 02:25:01.717613] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:28.196 [2024-11-28 02:25:01.717659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.196 [2024-11-28 02:25:01.717674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:28.196 [2024-11-28 02:25:01.717683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.196 [2024-11-28 02:25:01.719714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.196 [2024-11-28 02:25:01.719750] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:28.196 BaseBdev2 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.196 02:25:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.196 BaseBdev3_malloc 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.196 true 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.196 [2024-11-28 02:25:01.813319] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:28.196 [2024-11-28 02:25:01.813364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.196 [2024-11-28 02:25:01.813380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:28.196 [2024-11-28 02:25:01.813390] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.196 [2024-11-28 02:25:01.815339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.196 [2024-11-28 02:25:01.815374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:28.196 BaseBdev3 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.196 [2024-11-28 02:25:01.825365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.196 [2024-11-28 02:25:01.827036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.196 [2024-11-28 02:25:01.827105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.196 [2024-11-28 02:25:01.827290] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:28.196 [2024-11-28 02:25:01.827303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:28.196 [2024-11-28 02:25:01.827528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:28.196 [2024-11-28 02:25:01.827703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:28.196 [2024-11-28 02:25:01.827721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:28.196 [2024-11-28 02:25:01.827861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.196 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.455 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.455 "name": "raid_bdev1", 00:09:28.455 "uuid": "40e2351d-96ff-427d-8f37-60b1ca1bc33e", 00:09:28.455 "strip_size_kb": 0, 00:09:28.455 "state": "online", 00:09:28.455 "raid_level": "raid1", 00:09:28.455 "superblock": true, 00:09:28.455 "num_base_bdevs": 3, 00:09:28.455 "num_base_bdevs_discovered": 3, 00:09:28.455 "num_base_bdevs_operational": 3, 00:09:28.455 "base_bdevs_list": [ 00:09:28.455 { 00:09:28.455 "name": "BaseBdev1", 00:09:28.455 
"uuid": "3cc6ef2a-eddd-58ce-9b0b-1c8d31fc5a39", 00:09:28.455 "is_configured": true, 00:09:28.455 "data_offset": 2048, 00:09:28.455 "data_size": 63488 00:09:28.455 }, 00:09:28.455 { 00:09:28.455 "name": "BaseBdev2", 00:09:28.455 "uuid": "c1d0829f-b15b-55a6-8b7e-af2df77cc076", 00:09:28.455 "is_configured": true, 00:09:28.455 "data_offset": 2048, 00:09:28.455 "data_size": 63488 00:09:28.455 }, 00:09:28.455 { 00:09:28.455 "name": "BaseBdev3", 00:09:28.455 "uuid": "def28c54-f885-5664-8741-453a6343f5dc", 00:09:28.455 "is_configured": true, 00:09:28.455 "data_offset": 2048, 00:09:28.455 "data_size": 63488 00:09:28.455 } 00:09:28.455 ] 00:09:28.455 }' 00:09:28.455 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.455 02:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.715 02:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.715 02:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:28.715 [2024-11-28 02:25:02.361733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.656 [2024-11-28 02:25:03.300733] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:29.656 [2024-11-28 02:25:03.300784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.656 [2024-11-28 02:25:03.301010] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.656 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.916 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.917 "name": "raid_bdev1", 00:09:29.917 "uuid": "40e2351d-96ff-427d-8f37-60b1ca1bc33e", 00:09:29.917 "strip_size_kb": 0, 00:09:29.917 "state": "online", 00:09:29.917 "raid_level": "raid1", 00:09:29.917 "superblock": true, 00:09:29.917 "num_base_bdevs": 3, 00:09:29.917 "num_base_bdevs_discovered": 2, 00:09:29.917 "num_base_bdevs_operational": 2, 00:09:29.917 "base_bdevs_list": [ 00:09:29.917 { 00:09:29.917 "name": null, 00:09:29.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.917 "is_configured": false, 00:09:29.917 "data_offset": 0, 00:09:29.917 "data_size": 63488 00:09:29.917 }, 00:09:29.917 { 00:09:29.917 "name": "BaseBdev2", 00:09:29.917 "uuid": "c1d0829f-b15b-55a6-8b7e-af2df77cc076", 00:09:29.917 "is_configured": true, 00:09:29.917 "data_offset": 2048, 00:09:29.917 "data_size": 63488 00:09:29.917 }, 00:09:29.917 { 00:09:29.917 "name": "BaseBdev3", 00:09:29.917 "uuid": "def28c54-f885-5664-8741-453a6343f5dc", 00:09:29.917 "is_configured": true, 00:09:29.917 "data_offset": 2048, 00:09:29.917 "data_size": 63488 00:09:29.917 } 00:09:29.917 ] 00:09:29.917 }' 00:09:29.917 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.917 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.177 [2024-11-28 02:25:03.751143] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.177 [2024-11-28 02:25:03.751180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.177 [2024-11-28 02:25:03.753754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.177 [2024-11-28 02:25:03.753819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.177 [2024-11-28 02:25:03.753901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.177 [2024-11-28 02:25:03.753952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:30.177 { 00:09:30.177 "results": [ 00:09:30.177 { 00:09:30.177 "job": "raid_bdev1", 00:09:30.177 "core_mask": "0x1", 00:09:30.177 "workload": "randrw", 00:09:30.177 "percentage": 50, 00:09:30.177 "status": "finished", 00:09:30.177 "queue_depth": 1, 00:09:30.177 "io_size": 131072, 00:09:30.177 "runtime": 1.390386, 00:09:30.177 "iops": 14995.116464061059, 00:09:30.177 "mibps": 1874.3895580076323, 00:09:30.177 "io_failed": 0, 00:09:30.177 "io_timeout": 0, 00:09:30.177 "avg_latency_us": 64.06788710086522, 00:09:30.177 "min_latency_us": 22.69344978165939, 00:09:30.177 "max_latency_us": 1445.2262008733624 00:09:30.177 } 00:09:30.177 ], 00:09:30.177 "core_count": 1 00:09:30.177 } 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69028 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69028 ']' 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69028 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:30.177 02:25:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69028 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.177 killing process with pid 69028 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69028' 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69028 00:09:30.177 [2024-11-28 02:25:03.798748] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.177 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69028 00:09:30.437 [2024-11-28 02:25:04.019827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.847 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:31.847 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YBc5I9L6CH 00:09:31.847 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:31.847 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:31.847 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:31.847 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.847 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:31.847 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:31.847 00:09:31.847 real 0m4.483s 00:09:31.847 user 0m5.327s 00:09:31.847 sys 0m0.561s 00:09:31.847 02:25:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.847 02:25:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.847 ************************************ 00:09:31.847 END TEST raid_write_error_test 00:09:31.847 ************************************ 00:09:31.847 02:25:05 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:31.847 02:25:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:31.847 02:25:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:31.847 02:25:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:31.847 02:25:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.847 02:25:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.847 ************************************ 00:09:31.847 START TEST raid_state_function_test 00:09:31.847 ************************************ 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:31.847 
02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69166 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69166' 00:09:31.847 Process raid pid: 69166 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69166 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69166 ']' 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.847 02:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.847 [2024-11-28 02:25:05.317389] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:31.847 [2024-11-28 02:25:05.317493] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.847 [2024-11-28 02:25:05.493606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.108 [2024-11-28 02:25:05.650448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.367 [2024-11-28 02:25:05.891344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.367 [2024-11-28 02:25:05.891400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 [2024-11-28 02:25:06.149908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.628 [2024-11-28 02:25:06.150010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.628 [2024-11-28 02:25:06.150021] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.628 [2024-11-28 02:25:06.150032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.628 [2024-11-28 02:25:06.150038] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:32.628 [2024-11-28 02:25:06.150047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.628 [2024-11-28 02:25:06.150053] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:32.628 [2024-11-28 02:25:06.150063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.628 "name": "Existed_Raid", 00:09:32.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.628 "strip_size_kb": 64, 00:09:32.628 "state": "configuring", 00:09:32.628 "raid_level": "raid0", 00:09:32.628 "superblock": false, 00:09:32.628 "num_base_bdevs": 4, 00:09:32.628 "num_base_bdevs_discovered": 0, 00:09:32.628 "num_base_bdevs_operational": 4, 00:09:32.628 "base_bdevs_list": [ 00:09:32.628 { 00:09:32.628 "name": "BaseBdev1", 00:09:32.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.628 "is_configured": false, 00:09:32.628 "data_offset": 0, 00:09:32.628 "data_size": 0 00:09:32.628 }, 00:09:32.628 { 00:09:32.628 "name": "BaseBdev2", 00:09:32.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.628 "is_configured": false, 00:09:32.628 "data_offset": 0, 00:09:32.628 "data_size": 0 00:09:32.628 }, 00:09:32.628 { 00:09:32.628 "name": "BaseBdev3", 00:09:32.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.628 "is_configured": false, 00:09:32.628 "data_offset": 0, 00:09:32.628 "data_size": 0 00:09:32.628 }, 00:09:32.628 { 00:09:32.628 "name": "BaseBdev4", 00:09:32.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.628 "is_configured": false, 00:09:32.628 "data_offset": 0, 00:09:32.628 "data_size": 0 00:09:32.628 } 00:09:32.628 ] 00:09:32.628 }' 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.628 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.198 [2024-11-28 02:25:06.617064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.198 [2024-11-28 02:25:06.617127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.198 [2024-11-28 02:25:06.629054] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.198 [2024-11-28 02:25:06.629125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.198 [2024-11-28 02:25:06.629136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.198 [2024-11-28 02:25:06.629146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.198 [2024-11-28 02:25:06.629153] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.198 [2024-11-28 02:25:06.629162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.198 [2024-11-28 02:25:06.629169] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:33.198 [2024-11-28 02:25:06.629179] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.198 [2024-11-28 02:25:06.680729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.198 BaseBdev1 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.198 [ 00:09:33.198 { 00:09:33.198 "name": "BaseBdev1", 00:09:33.198 "aliases": [ 00:09:33.198 "a81d7268-d395-4c4a-a515-8333431b972f" 00:09:33.198 ], 00:09:33.198 "product_name": "Malloc disk", 00:09:33.198 "block_size": 512, 00:09:33.198 "num_blocks": 65536, 00:09:33.198 "uuid": "a81d7268-d395-4c4a-a515-8333431b972f", 00:09:33.198 "assigned_rate_limits": { 00:09:33.198 "rw_ios_per_sec": 0, 00:09:33.198 "rw_mbytes_per_sec": 0, 00:09:33.198 "r_mbytes_per_sec": 0, 00:09:33.198 "w_mbytes_per_sec": 0 00:09:33.198 }, 00:09:33.198 "claimed": true, 00:09:33.198 "claim_type": "exclusive_write", 00:09:33.198 "zoned": false, 00:09:33.198 "supported_io_types": { 00:09:33.198 "read": true, 00:09:33.198 "write": true, 00:09:33.198 "unmap": true, 00:09:33.198 "flush": true, 00:09:33.198 "reset": true, 00:09:33.198 "nvme_admin": false, 00:09:33.198 "nvme_io": false, 00:09:33.198 "nvme_io_md": false, 00:09:33.198 "write_zeroes": true, 00:09:33.198 "zcopy": true, 00:09:33.198 "get_zone_info": false, 00:09:33.198 "zone_management": false, 00:09:33.198 "zone_append": false, 00:09:33.198 "compare": false, 00:09:33.198 "compare_and_write": false, 00:09:33.198 "abort": true, 00:09:33.198 "seek_hole": false, 00:09:33.198 "seek_data": false, 00:09:33.198 "copy": true, 00:09:33.198 "nvme_iov_md": false 00:09:33.198 }, 00:09:33.198 "memory_domains": [ 00:09:33.198 { 00:09:33.198 "dma_device_id": "system", 00:09:33.198 "dma_device_type": 1 00:09:33.198 }, 00:09:33.198 { 00:09:33.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.198 "dma_device_type": 2 00:09:33.198 } 00:09:33.198 ], 00:09:33.198 "driver_specific": {} 00:09:33.198 } 00:09:33.198 ] 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.198 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.199 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.199 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.199 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.199 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.199 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.199 "name": "Existed_Raid", 
00:09:33.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.199 "strip_size_kb": 64, 00:09:33.199 "state": "configuring", 00:09:33.199 "raid_level": "raid0", 00:09:33.199 "superblock": false, 00:09:33.199 "num_base_bdevs": 4, 00:09:33.199 "num_base_bdevs_discovered": 1, 00:09:33.199 "num_base_bdevs_operational": 4, 00:09:33.199 "base_bdevs_list": [ 00:09:33.199 { 00:09:33.199 "name": "BaseBdev1", 00:09:33.199 "uuid": "a81d7268-d395-4c4a-a515-8333431b972f", 00:09:33.199 "is_configured": true, 00:09:33.199 "data_offset": 0, 00:09:33.199 "data_size": 65536 00:09:33.199 }, 00:09:33.199 { 00:09:33.199 "name": "BaseBdev2", 00:09:33.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.199 "is_configured": false, 00:09:33.199 "data_offset": 0, 00:09:33.199 "data_size": 0 00:09:33.199 }, 00:09:33.199 { 00:09:33.199 "name": "BaseBdev3", 00:09:33.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.199 "is_configured": false, 00:09:33.199 "data_offset": 0, 00:09:33.199 "data_size": 0 00:09:33.199 }, 00:09:33.199 { 00:09:33.199 "name": "BaseBdev4", 00:09:33.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.199 "is_configured": false, 00:09:33.199 "data_offset": 0, 00:09:33.199 "data_size": 0 00:09:33.199 } 00:09:33.199 ] 00:09:33.199 }' 00:09:33.199 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.199 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.767 [2024-11-28 02:25:07.171999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.767 [2024-11-28 02:25:07.172082] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.767 [2024-11-28 02:25:07.183993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.767 [2024-11-28 02:25:07.186194] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.767 [2024-11-28 02:25:07.186324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.767 [2024-11-28 02:25:07.186339] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.767 [2024-11-28 02:25:07.186351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.767 [2024-11-28 02:25:07.186358] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:33.767 [2024-11-28 02:25:07.186366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:33.767 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.768 "name": "Existed_Raid", 00:09:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.768 "strip_size_kb": 64, 00:09:33.768 "state": "configuring", 00:09:33.768 "raid_level": "raid0", 00:09:33.768 "superblock": false, 00:09:33.768 "num_base_bdevs": 4, 00:09:33.768 
"num_base_bdevs_discovered": 1, 00:09:33.768 "num_base_bdevs_operational": 4, 00:09:33.768 "base_bdevs_list": [ 00:09:33.768 { 00:09:33.768 "name": "BaseBdev1", 00:09:33.768 "uuid": "a81d7268-d395-4c4a-a515-8333431b972f", 00:09:33.768 "is_configured": true, 00:09:33.768 "data_offset": 0, 00:09:33.768 "data_size": 65536 00:09:33.768 }, 00:09:33.768 { 00:09:33.768 "name": "BaseBdev2", 00:09:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.768 "is_configured": false, 00:09:33.768 "data_offset": 0, 00:09:33.768 "data_size": 0 00:09:33.768 }, 00:09:33.768 { 00:09:33.768 "name": "BaseBdev3", 00:09:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.768 "is_configured": false, 00:09:33.768 "data_offset": 0, 00:09:33.768 "data_size": 0 00:09:33.768 }, 00:09:33.768 { 00:09:33.768 "name": "BaseBdev4", 00:09:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.768 "is_configured": false, 00:09:33.768 "data_offset": 0, 00:09:33.768 "data_size": 0 00:09:33.768 } 00:09:33.768 ] 00:09:33.768 }' 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.768 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.028 [2024-11-28 02:25:07.638724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.028 BaseBdev2 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.028 02:25:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.028 [ 00:09:34.028 { 00:09:34.028 "name": "BaseBdev2", 00:09:34.028 "aliases": [ 00:09:34.028 "298a960c-159d-4aad-8c38-df053ce01066" 00:09:34.028 ], 00:09:34.028 "product_name": "Malloc disk", 00:09:34.028 "block_size": 512, 00:09:34.028 "num_blocks": 65536, 00:09:34.028 "uuid": "298a960c-159d-4aad-8c38-df053ce01066", 00:09:34.028 "assigned_rate_limits": { 00:09:34.028 "rw_ios_per_sec": 0, 00:09:34.028 "rw_mbytes_per_sec": 0, 00:09:34.028 "r_mbytes_per_sec": 0, 00:09:34.028 "w_mbytes_per_sec": 0 00:09:34.028 }, 00:09:34.028 "claimed": true, 00:09:34.028 "claim_type": "exclusive_write", 00:09:34.028 "zoned": false, 00:09:34.028 "supported_io_types": { 
00:09:34.028 "read": true, 00:09:34.028 "write": true, 00:09:34.028 "unmap": true, 00:09:34.028 "flush": true, 00:09:34.028 "reset": true, 00:09:34.028 "nvme_admin": false, 00:09:34.028 "nvme_io": false, 00:09:34.028 "nvme_io_md": false, 00:09:34.028 "write_zeroes": true, 00:09:34.028 "zcopy": true, 00:09:34.028 "get_zone_info": false, 00:09:34.028 "zone_management": false, 00:09:34.028 "zone_append": false, 00:09:34.028 "compare": false, 00:09:34.028 "compare_and_write": false, 00:09:34.028 "abort": true, 00:09:34.028 "seek_hole": false, 00:09:34.028 "seek_data": false, 00:09:34.028 "copy": true, 00:09:34.028 "nvme_iov_md": false 00:09:34.028 }, 00:09:34.028 "memory_domains": [ 00:09:34.028 { 00:09:34.028 "dma_device_id": "system", 00:09:34.028 "dma_device_type": 1 00:09:34.028 }, 00:09:34.028 { 00:09:34.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.028 "dma_device_type": 2 00:09:34.028 } 00:09:34.028 ], 00:09:34.028 "driver_specific": {} 00:09:34.028 } 00:09:34.028 ] 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.028 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.288 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.288 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.288 "name": "Existed_Raid", 00:09:34.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.288 "strip_size_kb": 64, 00:09:34.288 "state": "configuring", 00:09:34.288 "raid_level": "raid0", 00:09:34.288 "superblock": false, 00:09:34.288 "num_base_bdevs": 4, 00:09:34.288 "num_base_bdevs_discovered": 2, 00:09:34.288 "num_base_bdevs_operational": 4, 00:09:34.288 "base_bdevs_list": [ 00:09:34.288 { 00:09:34.288 "name": "BaseBdev1", 00:09:34.288 "uuid": "a81d7268-d395-4c4a-a515-8333431b972f", 00:09:34.288 "is_configured": true, 00:09:34.288 "data_offset": 0, 00:09:34.288 "data_size": 65536 00:09:34.288 }, 00:09:34.288 { 00:09:34.288 "name": "BaseBdev2", 00:09:34.288 "uuid": "298a960c-159d-4aad-8c38-df053ce01066", 00:09:34.288 
"is_configured": true, 00:09:34.288 "data_offset": 0, 00:09:34.288 "data_size": 65536 00:09:34.288 }, 00:09:34.288 { 00:09:34.288 "name": "BaseBdev3", 00:09:34.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.288 "is_configured": false, 00:09:34.288 "data_offset": 0, 00:09:34.288 "data_size": 0 00:09:34.288 }, 00:09:34.288 { 00:09:34.288 "name": "BaseBdev4", 00:09:34.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.288 "is_configured": false, 00:09:34.288 "data_offset": 0, 00:09:34.288 "data_size": 0 00:09:34.288 } 00:09:34.288 ] 00:09:34.288 }' 00:09:34.288 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.288 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.548 [2024-11-28 02:25:08.131177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.548 BaseBdev3 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.548 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.548 [ 00:09:34.548 { 00:09:34.548 "name": "BaseBdev3", 00:09:34.548 "aliases": [ 00:09:34.548 "62720eaa-3c4b-4832-b303-f31518316b8c" 00:09:34.548 ], 00:09:34.548 "product_name": "Malloc disk", 00:09:34.548 "block_size": 512, 00:09:34.548 "num_blocks": 65536, 00:09:34.548 "uuid": "62720eaa-3c4b-4832-b303-f31518316b8c", 00:09:34.548 "assigned_rate_limits": { 00:09:34.548 "rw_ios_per_sec": 0, 00:09:34.548 "rw_mbytes_per_sec": 0, 00:09:34.548 "r_mbytes_per_sec": 0, 00:09:34.548 "w_mbytes_per_sec": 0 00:09:34.548 }, 00:09:34.548 "claimed": true, 00:09:34.548 "claim_type": "exclusive_write", 00:09:34.548 "zoned": false, 00:09:34.548 "supported_io_types": { 00:09:34.549 "read": true, 00:09:34.549 "write": true, 00:09:34.549 "unmap": true, 00:09:34.549 "flush": true, 00:09:34.549 "reset": true, 00:09:34.549 "nvme_admin": false, 00:09:34.549 "nvme_io": false, 00:09:34.549 "nvme_io_md": false, 00:09:34.549 "write_zeroes": true, 00:09:34.549 "zcopy": true, 00:09:34.549 "get_zone_info": false, 00:09:34.549 "zone_management": false, 00:09:34.549 "zone_append": false, 00:09:34.549 "compare": false, 00:09:34.549 "compare_and_write": false, 
00:09:34.549 "abort": true, 00:09:34.549 "seek_hole": false, 00:09:34.549 "seek_data": false, 00:09:34.549 "copy": true, 00:09:34.549 "nvme_iov_md": false 00:09:34.549 }, 00:09:34.549 "memory_domains": [ 00:09:34.549 { 00:09:34.549 "dma_device_id": "system", 00:09:34.549 "dma_device_type": 1 00:09:34.549 }, 00:09:34.549 { 00:09:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.549 "dma_device_type": 2 00:09:34.549 } 00:09:34.549 ], 00:09:34.549 "driver_specific": {} 00:09:34.549 } 00:09:34.549 ] 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.549 "name": "Existed_Raid", 00:09:34.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.549 "strip_size_kb": 64, 00:09:34.549 "state": "configuring", 00:09:34.549 "raid_level": "raid0", 00:09:34.549 "superblock": false, 00:09:34.549 "num_base_bdevs": 4, 00:09:34.549 "num_base_bdevs_discovered": 3, 00:09:34.549 "num_base_bdevs_operational": 4, 00:09:34.549 "base_bdevs_list": [ 00:09:34.549 { 00:09:34.549 "name": "BaseBdev1", 00:09:34.549 "uuid": "a81d7268-d395-4c4a-a515-8333431b972f", 00:09:34.549 "is_configured": true, 00:09:34.549 "data_offset": 0, 00:09:34.549 "data_size": 65536 00:09:34.549 }, 00:09:34.549 { 00:09:34.549 "name": "BaseBdev2", 00:09:34.549 "uuid": "298a960c-159d-4aad-8c38-df053ce01066", 00:09:34.549 "is_configured": true, 00:09:34.549 "data_offset": 0, 00:09:34.549 "data_size": 65536 00:09:34.549 }, 00:09:34.549 { 00:09:34.549 "name": "BaseBdev3", 00:09:34.549 "uuid": "62720eaa-3c4b-4832-b303-f31518316b8c", 00:09:34.549 "is_configured": true, 00:09:34.549 "data_offset": 0, 00:09:34.549 "data_size": 65536 00:09:34.549 }, 00:09:34.549 { 00:09:34.549 "name": "BaseBdev4", 00:09:34.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.549 "is_configured": false, 
00:09:34.549 "data_offset": 0, 00:09:34.549 "data_size": 0 00:09:34.549 } 00:09:34.549 ] 00:09:34.549 }' 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.549 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.118 [2024-11-28 02:25:08.631665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:35.118 [2024-11-28 02:25:08.631814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.118 [2024-11-28 02:25:08.631843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:35.118 [2024-11-28 02:25:08.632195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:35.118 [2024-11-28 02:25:08.632437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.118 [2024-11-28 02:25:08.632482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.118 [2024-11-28 02:25:08.632814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.118 BaseBdev4 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.118 [ 00:09:35.118 { 00:09:35.118 "name": "BaseBdev4", 00:09:35.118 "aliases": [ 00:09:35.118 "352f7219-00ae-4a1d-9e87-dc4ec3b29964" 00:09:35.118 ], 00:09:35.118 "product_name": "Malloc disk", 00:09:35.118 "block_size": 512, 00:09:35.118 "num_blocks": 65536, 00:09:35.118 "uuid": "352f7219-00ae-4a1d-9e87-dc4ec3b29964", 00:09:35.118 "assigned_rate_limits": { 00:09:35.118 "rw_ios_per_sec": 0, 00:09:35.118 "rw_mbytes_per_sec": 0, 00:09:35.118 "r_mbytes_per_sec": 0, 00:09:35.118 "w_mbytes_per_sec": 0 00:09:35.118 }, 00:09:35.118 "claimed": true, 00:09:35.118 "claim_type": "exclusive_write", 00:09:35.118 "zoned": false, 00:09:35.118 "supported_io_types": { 00:09:35.118 "read": true, 00:09:35.118 "write": true, 00:09:35.118 "unmap": true, 00:09:35.118 "flush": true, 00:09:35.118 "reset": true, 00:09:35.118 
"nvme_admin": false, 00:09:35.118 "nvme_io": false, 00:09:35.118 "nvme_io_md": false, 00:09:35.118 "write_zeroes": true, 00:09:35.118 "zcopy": true, 00:09:35.118 "get_zone_info": false, 00:09:35.118 "zone_management": false, 00:09:35.118 "zone_append": false, 00:09:35.118 "compare": false, 00:09:35.118 "compare_and_write": false, 00:09:35.118 "abort": true, 00:09:35.118 "seek_hole": false, 00:09:35.118 "seek_data": false, 00:09:35.118 "copy": true, 00:09:35.118 "nvme_iov_md": false 00:09:35.118 }, 00:09:35.118 "memory_domains": [ 00:09:35.118 { 00:09:35.118 "dma_device_id": "system", 00:09:35.118 "dma_device_type": 1 00:09:35.118 }, 00:09:35.118 { 00:09:35.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.118 "dma_device_type": 2 00:09:35.118 } 00:09:35.118 ], 00:09:35.118 "driver_specific": {} 00:09:35.118 } 00:09:35.118 ] 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.118 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.119 02:25:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.119 "name": "Existed_Raid", 00:09:35.119 "uuid": "721de0ed-20a8-44f0-bd1a-04e966769175", 00:09:35.119 "strip_size_kb": 64, 00:09:35.119 "state": "online", 00:09:35.119 "raid_level": "raid0", 00:09:35.119 "superblock": false, 00:09:35.119 "num_base_bdevs": 4, 00:09:35.119 "num_base_bdevs_discovered": 4, 00:09:35.119 "num_base_bdevs_operational": 4, 00:09:35.119 "base_bdevs_list": [ 00:09:35.119 { 00:09:35.119 "name": "BaseBdev1", 00:09:35.119 "uuid": "a81d7268-d395-4c4a-a515-8333431b972f", 00:09:35.119 "is_configured": true, 00:09:35.119 "data_offset": 0, 00:09:35.119 "data_size": 65536 00:09:35.119 }, 00:09:35.119 { 00:09:35.119 "name": "BaseBdev2", 00:09:35.119 "uuid": "298a960c-159d-4aad-8c38-df053ce01066", 00:09:35.119 "is_configured": true, 00:09:35.119 "data_offset": 0, 00:09:35.119 "data_size": 65536 00:09:35.119 }, 00:09:35.119 { 00:09:35.119 "name": "BaseBdev3", 00:09:35.119 "uuid": 
"62720eaa-3c4b-4832-b303-f31518316b8c", 00:09:35.119 "is_configured": true, 00:09:35.119 "data_offset": 0, 00:09:35.119 "data_size": 65536 00:09:35.119 }, 00:09:35.119 { 00:09:35.119 "name": "BaseBdev4", 00:09:35.119 "uuid": "352f7219-00ae-4a1d-9e87-dc4ec3b29964", 00:09:35.119 "is_configured": true, 00:09:35.119 "data_offset": 0, 00:09:35.119 "data_size": 65536 00:09:35.119 } 00:09:35.119 ] 00:09:35.119 }' 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.119 02:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.689 [2024-11-28 02:25:09.091573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.689 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.689 02:25:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.689 "name": "Existed_Raid", 00:09:35.689 "aliases": [ 00:09:35.689 "721de0ed-20a8-44f0-bd1a-04e966769175" 00:09:35.689 ], 00:09:35.689 "product_name": "Raid Volume", 00:09:35.689 "block_size": 512, 00:09:35.689 "num_blocks": 262144, 00:09:35.689 "uuid": "721de0ed-20a8-44f0-bd1a-04e966769175", 00:09:35.689 "assigned_rate_limits": { 00:09:35.689 "rw_ios_per_sec": 0, 00:09:35.689 "rw_mbytes_per_sec": 0, 00:09:35.689 "r_mbytes_per_sec": 0, 00:09:35.689 "w_mbytes_per_sec": 0 00:09:35.689 }, 00:09:35.689 "claimed": false, 00:09:35.689 "zoned": false, 00:09:35.689 "supported_io_types": { 00:09:35.689 "read": true, 00:09:35.689 "write": true, 00:09:35.689 "unmap": true, 00:09:35.689 "flush": true, 00:09:35.689 "reset": true, 00:09:35.689 "nvme_admin": false, 00:09:35.689 "nvme_io": false, 00:09:35.689 "nvme_io_md": false, 00:09:35.689 "write_zeroes": true, 00:09:35.689 "zcopy": false, 00:09:35.690 "get_zone_info": false, 00:09:35.690 "zone_management": false, 00:09:35.690 "zone_append": false, 00:09:35.690 "compare": false, 00:09:35.690 "compare_and_write": false, 00:09:35.690 "abort": false, 00:09:35.690 "seek_hole": false, 00:09:35.690 "seek_data": false, 00:09:35.690 "copy": false, 00:09:35.690 "nvme_iov_md": false 00:09:35.690 }, 00:09:35.690 "memory_domains": [ 00:09:35.690 { 00:09:35.690 "dma_device_id": "system", 00:09:35.690 "dma_device_type": 1 00:09:35.690 }, 00:09:35.690 { 00:09:35.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.690 "dma_device_type": 2 00:09:35.690 }, 00:09:35.690 { 00:09:35.690 "dma_device_id": "system", 00:09:35.690 "dma_device_type": 1 00:09:35.690 }, 00:09:35.690 { 00:09:35.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.690 "dma_device_type": 2 00:09:35.690 }, 00:09:35.690 { 00:09:35.690 "dma_device_id": "system", 00:09:35.690 "dma_device_type": 1 00:09:35.690 }, 00:09:35.690 { 00:09:35.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:35.690 "dma_device_type": 2 00:09:35.690 }, 00:09:35.690 { 00:09:35.690 "dma_device_id": "system", 00:09:35.690 "dma_device_type": 1 00:09:35.690 }, 00:09:35.690 { 00:09:35.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.690 "dma_device_type": 2 00:09:35.690 } 00:09:35.690 ], 00:09:35.690 "driver_specific": { 00:09:35.690 "raid": { 00:09:35.690 "uuid": "721de0ed-20a8-44f0-bd1a-04e966769175", 00:09:35.690 "strip_size_kb": 64, 00:09:35.690 "state": "online", 00:09:35.690 "raid_level": "raid0", 00:09:35.690 "superblock": false, 00:09:35.690 "num_base_bdevs": 4, 00:09:35.690 "num_base_bdevs_discovered": 4, 00:09:35.690 "num_base_bdevs_operational": 4, 00:09:35.690 "base_bdevs_list": [ 00:09:35.690 { 00:09:35.690 "name": "BaseBdev1", 00:09:35.690 "uuid": "a81d7268-d395-4c4a-a515-8333431b972f", 00:09:35.690 "is_configured": true, 00:09:35.690 "data_offset": 0, 00:09:35.690 "data_size": 65536 00:09:35.690 }, 00:09:35.690 { 00:09:35.690 "name": "BaseBdev2", 00:09:35.690 "uuid": "298a960c-159d-4aad-8c38-df053ce01066", 00:09:35.690 "is_configured": true, 00:09:35.690 "data_offset": 0, 00:09:35.690 "data_size": 65536 00:09:35.690 }, 00:09:35.690 { 00:09:35.690 "name": "BaseBdev3", 00:09:35.690 "uuid": "62720eaa-3c4b-4832-b303-f31518316b8c", 00:09:35.690 "is_configured": true, 00:09:35.690 "data_offset": 0, 00:09:35.690 "data_size": 65536 00:09:35.690 }, 00:09:35.690 { 00:09:35.690 "name": "BaseBdev4", 00:09:35.690 "uuid": "352f7219-00ae-4a1d-9e87-dc4ec3b29964", 00:09:35.690 "is_configured": true, 00:09:35.690 "data_offset": 0, 00:09:35.690 "data_size": 65536 00:09:35.690 } 00:09:35.690 ] 00:09:35.690 } 00:09:35.690 } 00:09:35.690 }' 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.690 BaseBdev2 00:09:35.690 BaseBdev3 
00:09:35.690 BaseBdev4' 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.690 02:25:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.690 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.950 02:25:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.950 [2024-11-28 02:25:09.410617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.950 [2024-11-28 02:25:09.410660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.950 [2024-11-28 02:25:09.410718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.950 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.951 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.951 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.951 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.951 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.951 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.951 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.951 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.951 "name": "Existed_Raid", 00:09:35.951 "uuid": "721de0ed-20a8-44f0-bd1a-04e966769175", 00:09:35.951 "strip_size_kb": 64, 00:09:35.951 "state": "offline", 00:09:35.951 "raid_level": "raid0", 00:09:35.951 "superblock": false, 00:09:35.951 "num_base_bdevs": 4, 00:09:35.951 "num_base_bdevs_discovered": 3, 00:09:35.951 "num_base_bdevs_operational": 3, 00:09:35.951 "base_bdevs_list": [ 00:09:35.951 { 00:09:35.951 "name": null, 00:09:35.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.951 "is_configured": false, 00:09:35.951 "data_offset": 0, 00:09:35.951 "data_size": 65536 00:09:35.951 }, 00:09:35.951 { 00:09:35.951 "name": "BaseBdev2", 00:09:35.951 "uuid": "298a960c-159d-4aad-8c38-df053ce01066", 00:09:35.951 "is_configured": 
true, 00:09:35.951 "data_offset": 0, 00:09:35.951 "data_size": 65536 00:09:35.951 }, 00:09:35.951 { 00:09:35.951 "name": "BaseBdev3", 00:09:35.951 "uuid": "62720eaa-3c4b-4832-b303-f31518316b8c", 00:09:35.951 "is_configured": true, 00:09:35.951 "data_offset": 0, 00:09:35.951 "data_size": 65536 00:09:35.951 }, 00:09:35.951 { 00:09:35.951 "name": "BaseBdev4", 00:09:35.951 "uuid": "352f7219-00ae-4a1d-9e87-dc4ec3b29964", 00:09:35.951 "is_configured": true, 00:09:35.951 "data_offset": 0, 00:09:35.951 "data_size": 65536 00:09:35.951 } 00:09:35.951 ] 00:09:35.951 }' 00:09:35.951 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.951 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:36.520 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.520 [2024-11-28 02:25:09.965452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.520 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.521 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.521 [2024-11-28 02:25:10.130576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.780 02:25:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.780 [2024-11-28 02:25:10.291242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:36.780 [2024-11-28 02:25:10.291364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.780 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.781 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.781 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:36.781 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.781 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.781 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.781 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.781 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.041 BaseBdev2 00:09:37.041 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.041 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:37.041 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:37.041 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.041 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.041 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.041 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:37.041 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.042 [ 00:09:37.042 { 00:09:37.042 "name": "BaseBdev2", 00:09:37.042 "aliases": [ 00:09:37.042 "d22874fb-7bd9-4040-b653-cda9ad4a2c33" 00:09:37.042 ], 00:09:37.042 "product_name": "Malloc disk", 00:09:37.042 "block_size": 512, 00:09:37.042 "num_blocks": 65536, 00:09:37.042 "uuid": "d22874fb-7bd9-4040-b653-cda9ad4a2c33", 00:09:37.042 "assigned_rate_limits": { 00:09:37.042 "rw_ios_per_sec": 0, 00:09:37.042 "rw_mbytes_per_sec": 0, 00:09:37.042 "r_mbytes_per_sec": 0, 00:09:37.042 "w_mbytes_per_sec": 0 00:09:37.042 }, 00:09:37.042 "claimed": false, 00:09:37.042 "zoned": false, 00:09:37.042 "supported_io_types": { 00:09:37.042 "read": true, 00:09:37.042 "write": true, 00:09:37.042 "unmap": true, 00:09:37.042 "flush": true, 00:09:37.042 "reset": true, 00:09:37.042 "nvme_admin": false, 00:09:37.042 "nvme_io": false, 00:09:37.042 "nvme_io_md": false, 00:09:37.042 "write_zeroes": true, 00:09:37.042 "zcopy": true, 00:09:37.042 "get_zone_info": false, 00:09:37.042 "zone_management": false, 00:09:37.042 "zone_append": false, 00:09:37.042 "compare": false, 00:09:37.042 "compare_and_write": false, 00:09:37.042 "abort": true, 00:09:37.042 "seek_hole": false, 00:09:37.042 
"seek_data": false, 00:09:37.042 "copy": true, 00:09:37.042 "nvme_iov_md": false 00:09:37.042 }, 00:09:37.042 "memory_domains": [ 00:09:37.042 { 00:09:37.042 "dma_device_id": "system", 00:09:37.042 "dma_device_type": 1 00:09:37.042 }, 00:09:37.042 { 00:09:37.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.042 "dma_device_type": 2 00:09:37.042 } 00:09:37.042 ], 00:09:37.042 "driver_specific": {} 00:09:37.042 } 00:09:37.042 ] 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.042 BaseBdev3 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.042 [ 00:09:37.042 { 00:09:37.042 "name": "BaseBdev3", 00:09:37.042 "aliases": [ 00:09:37.042 "364f95f6-b4ec-46aa-bc06-8883f64fb47b" 00:09:37.042 ], 00:09:37.042 "product_name": "Malloc disk", 00:09:37.042 "block_size": 512, 00:09:37.042 "num_blocks": 65536, 00:09:37.042 "uuid": "364f95f6-b4ec-46aa-bc06-8883f64fb47b", 00:09:37.042 "assigned_rate_limits": { 00:09:37.042 "rw_ios_per_sec": 0, 00:09:37.042 "rw_mbytes_per_sec": 0, 00:09:37.042 "r_mbytes_per_sec": 0, 00:09:37.042 "w_mbytes_per_sec": 0 00:09:37.042 }, 00:09:37.042 "claimed": false, 00:09:37.042 "zoned": false, 00:09:37.042 "supported_io_types": { 00:09:37.042 "read": true, 00:09:37.042 "write": true, 00:09:37.042 "unmap": true, 00:09:37.042 "flush": true, 00:09:37.042 "reset": true, 00:09:37.042 "nvme_admin": false, 00:09:37.042 "nvme_io": false, 00:09:37.042 "nvme_io_md": false, 00:09:37.042 "write_zeroes": true, 00:09:37.042 "zcopy": true, 00:09:37.042 "get_zone_info": false, 00:09:37.042 "zone_management": false, 00:09:37.042 "zone_append": false, 00:09:37.042 "compare": false, 00:09:37.042 "compare_and_write": false, 00:09:37.042 "abort": true, 00:09:37.042 "seek_hole": false, 00:09:37.042 "seek_data": false, 
00:09:37.042 "copy": true, 00:09:37.042 "nvme_iov_md": false 00:09:37.042 }, 00:09:37.042 "memory_domains": [ 00:09:37.042 { 00:09:37.042 "dma_device_id": "system", 00:09:37.042 "dma_device_type": 1 00:09:37.042 }, 00:09:37.042 { 00:09:37.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.042 "dma_device_type": 2 00:09:37.042 } 00:09:37.042 ], 00:09:37.042 "driver_specific": {} 00:09:37.042 } 00:09:37.042 ] 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.042 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.043 BaseBdev4 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.043 
02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.043 [ 00:09:37.043 { 00:09:37.043 "name": "BaseBdev4", 00:09:37.043 "aliases": [ 00:09:37.043 "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6" 00:09:37.043 ], 00:09:37.043 "product_name": "Malloc disk", 00:09:37.043 "block_size": 512, 00:09:37.043 "num_blocks": 65536, 00:09:37.043 "uuid": "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6", 00:09:37.043 "assigned_rate_limits": { 00:09:37.043 "rw_ios_per_sec": 0, 00:09:37.043 "rw_mbytes_per_sec": 0, 00:09:37.043 "r_mbytes_per_sec": 0, 00:09:37.043 "w_mbytes_per_sec": 0 00:09:37.043 }, 00:09:37.043 "claimed": false, 00:09:37.043 "zoned": false, 00:09:37.043 "supported_io_types": { 00:09:37.043 "read": true, 00:09:37.043 "write": true, 00:09:37.043 "unmap": true, 00:09:37.043 "flush": true, 00:09:37.043 "reset": true, 00:09:37.043 "nvme_admin": false, 00:09:37.043 "nvme_io": false, 00:09:37.043 "nvme_io_md": false, 00:09:37.043 "write_zeroes": true, 00:09:37.043 "zcopy": true, 00:09:37.043 "get_zone_info": false, 00:09:37.043 "zone_management": false, 00:09:37.043 "zone_append": false, 00:09:37.043 "compare": false, 00:09:37.043 "compare_and_write": false, 00:09:37.043 "abort": true, 00:09:37.043 "seek_hole": false, 00:09:37.043 "seek_data": false, 00:09:37.043 
"copy": true, 00:09:37.043 "nvme_iov_md": false 00:09:37.043 }, 00:09:37.043 "memory_domains": [ 00:09:37.043 { 00:09:37.043 "dma_device_id": "system", 00:09:37.043 "dma_device_type": 1 00:09:37.043 }, 00:09:37.043 { 00:09:37.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.043 "dma_device_type": 2 00:09:37.043 } 00:09:37.043 ], 00:09:37.043 "driver_specific": {} 00:09:37.043 } 00:09:37.043 ] 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.043 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.043 [2024-11-28 02:25:10.713424] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.043 [2024-11-28 02:25:10.713514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.043 [2024-11-28 02:25:10.713539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.043 [2024-11-28 02:25:10.715630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.043 [2024-11-28 02:25:10.715685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.303 02:25:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.303 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.303 "name": "Existed_Raid", 00:09:37.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.303 "strip_size_kb": 64, 00:09:37.303 "state": "configuring", 00:09:37.303 
"raid_level": "raid0", 00:09:37.303 "superblock": false, 00:09:37.304 "num_base_bdevs": 4, 00:09:37.304 "num_base_bdevs_discovered": 3, 00:09:37.304 "num_base_bdevs_operational": 4, 00:09:37.304 "base_bdevs_list": [ 00:09:37.304 { 00:09:37.304 "name": "BaseBdev1", 00:09:37.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.304 "is_configured": false, 00:09:37.304 "data_offset": 0, 00:09:37.304 "data_size": 0 00:09:37.304 }, 00:09:37.304 { 00:09:37.304 "name": "BaseBdev2", 00:09:37.304 "uuid": "d22874fb-7bd9-4040-b653-cda9ad4a2c33", 00:09:37.304 "is_configured": true, 00:09:37.304 "data_offset": 0, 00:09:37.304 "data_size": 65536 00:09:37.304 }, 00:09:37.304 { 00:09:37.304 "name": "BaseBdev3", 00:09:37.304 "uuid": "364f95f6-b4ec-46aa-bc06-8883f64fb47b", 00:09:37.304 "is_configured": true, 00:09:37.304 "data_offset": 0, 00:09:37.304 "data_size": 65536 00:09:37.304 }, 00:09:37.304 { 00:09:37.304 "name": "BaseBdev4", 00:09:37.304 "uuid": "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6", 00:09:37.304 "is_configured": true, 00:09:37.304 "data_offset": 0, 00:09:37.304 "data_size": 65536 00:09:37.304 } 00:09:37.304 ] 00:09:37.304 }' 00:09:37.304 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.304 02:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.563 [2024-11-28 02:25:11.212674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.563 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.822 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.822 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.822 "name": "Existed_Raid", 00:09:37.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.822 "strip_size_kb": 64, 00:09:37.822 "state": "configuring", 00:09:37.822 "raid_level": "raid0", 00:09:37.822 "superblock": false, 00:09:37.822 
"num_base_bdevs": 4, 00:09:37.822 "num_base_bdevs_discovered": 2, 00:09:37.822 "num_base_bdevs_operational": 4, 00:09:37.822 "base_bdevs_list": [ 00:09:37.822 { 00:09:37.822 "name": "BaseBdev1", 00:09:37.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.822 "is_configured": false, 00:09:37.822 "data_offset": 0, 00:09:37.822 "data_size": 0 00:09:37.822 }, 00:09:37.822 { 00:09:37.822 "name": null, 00:09:37.822 "uuid": "d22874fb-7bd9-4040-b653-cda9ad4a2c33", 00:09:37.822 "is_configured": false, 00:09:37.822 "data_offset": 0, 00:09:37.822 "data_size": 65536 00:09:37.822 }, 00:09:37.822 { 00:09:37.822 "name": "BaseBdev3", 00:09:37.822 "uuid": "364f95f6-b4ec-46aa-bc06-8883f64fb47b", 00:09:37.822 "is_configured": true, 00:09:37.822 "data_offset": 0, 00:09:37.822 "data_size": 65536 00:09:37.822 }, 00:09:37.822 { 00:09:37.822 "name": "BaseBdev4", 00:09:37.822 "uuid": "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6", 00:09:37.822 "is_configured": true, 00:09:37.822 "data_offset": 0, 00:09:37.822 "data_size": 65536 00:09:37.822 } 00:09:37.822 ] 00:09:37.822 }' 00:09:37.822 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.822 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.082 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.082 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.082 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.082 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.082 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.082 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:38.082 02:25:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.082 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.082 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.342 [2024-11-28 02:25:11.763042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.342 BaseBdev1 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.342 02:25:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.342 [ 00:09:38.342 { 00:09:38.343 "name": "BaseBdev1", 00:09:38.343 "aliases": [ 00:09:38.343 "a23a102e-bde0-4a68-8fb0-8f23204047c1" 00:09:38.343 ], 00:09:38.343 "product_name": "Malloc disk", 00:09:38.343 "block_size": 512, 00:09:38.343 "num_blocks": 65536, 00:09:38.343 "uuid": "a23a102e-bde0-4a68-8fb0-8f23204047c1", 00:09:38.343 "assigned_rate_limits": { 00:09:38.343 "rw_ios_per_sec": 0, 00:09:38.343 "rw_mbytes_per_sec": 0, 00:09:38.343 "r_mbytes_per_sec": 0, 00:09:38.343 "w_mbytes_per_sec": 0 00:09:38.343 }, 00:09:38.343 "claimed": true, 00:09:38.343 "claim_type": "exclusive_write", 00:09:38.343 "zoned": false, 00:09:38.343 "supported_io_types": { 00:09:38.343 "read": true, 00:09:38.343 "write": true, 00:09:38.343 "unmap": true, 00:09:38.343 "flush": true, 00:09:38.343 "reset": true, 00:09:38.343 "nvme_admin": false, 00:09:38.343 "nvme_io": false, 00:09:38.343 "nvme_io_md": false, 00:09:38.343 "write_zeroes": true, 00:09:38.343 "zcopy": true, 00:09:38.343 "get_zone_info": false, 00:09:38.343 "zone_management": false, 00:09:38.343 "zone_append": false, 00:09:38.343 "compare": false, 00:09:38.343 "compare_and_write": false, 00:09:38.343 "abort": true, 00:09:38.343 "seek_hole": false, 00:09:38.343 "seek_data": false, 00:09:38.343 "copy": true, 00:09:38.343 "nvme_iov_md": false 00:09:38.343 }, 00:09:38.343 "memory_domains": [ 00:09:38.343 { 00:09:38.343 "dma_device_id": "system", 00:09:38.343 "dma_device_type": 1 00:09:38.343 }, 00:09:38.343 { 00:09:38.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.343 "dma_device_type": 2 00:09:38.343 } 00:09:38.343 ], 00:09:38.343 "driver_specific": {} 00:09:38.343 } 00:09:38.343 ] 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.343 "name": "Existed_Raid", 00:09:38.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.343 "strip_size_kb": 64, 00:09:38.343 "state": "configuring", 00:09:38.343 "raid_level": "raid0", 00:09:38.343 "superblock": false, 
00:09:38.343 "num_base_bdevs": 4, 00:09:38.343 "num_base_bdevs_discovered": 3, 00:09:38.343 "num_base_bdevs_operational": 4, 00:09:38.343 "base_bdevs_list": [ 00:09:38.343 { 00:09:38.343 "name": "BaseBdev1", 00:09:38.343 "uuid": "a23a102e-bde0-4a68-8fb0-8f23204047c1", 00:09:38.343 "is_configured": true, 00:09:38.343 "data_offset": 0, 00:09:38.343 "data_size": 65536 00:09:38.343 }, 00:09:38.343 { 00:09:38.343 "name": null, 00:09:38.343 "uuid": "d22874fb-7bd9-4040-b653-cda9ad4a2c33", 00:09:38.343 "is_configured": false, 00:09:38.343 "data_offset": 0, 00:09:38.343 "data_size": 65536 00:09:38.343 }, 00:09:38.343 { 00:09:38.343 "name": "BaseBdev3", 00:09:38.343 "uuid": "364f95f6-b4ec-46aa-bc06-8883f64fb47b", 00:09:38.343 "is_configured": true, 00:09:38.343 "data_offset": 0, 00:09:38.343 "data_size": 65536 00:09:38.343 }, 00:09:38.343 { 00:09:38.343 "name": "BaseBdev4", 00:09:38.343 "uuid": "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6", 00:09:38.343 "is_configured": true, 00:09:38.343 "data_offset": 0, 00:09:38.343 "data_size": 65536 00:09:38.343 } 00:09:38.343 ] 00:09:38.343 }' 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.343 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:38.604 02:25:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.604 [2024-11-28 02:25:12.258380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.604 02:25:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.604 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.863 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.863 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.863 "name": "Existed_Raid", 00:09:38.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.863 "strip_size_kb": 64, 00:09:38.863 "state": "configuring", 00:09:38.863 "raid_level": "raid0", 00:09:38.863 "superblock": false, 00:09:38.863 "num_base_bdevs": 4, 00:09:38.863 "num_base_bdevs_discovered": 2, 00:09:38.863 "num_base_bdevs_operational": 4, 00:09:38.863 "base_bdevs_list": [ 00:09:38.863 { 00:09:38.863 "name": "BaseBdev1", 00:09:38.863 "uuid": "a23a102e-bde0-4a68-8fb0-8f23204047c1", 00:09:38.863 "is_configured": true, 00:09:38.863 "data_offset": 0, 00:09:38.863 "data_size": 65536 00:09:38.863 }, 00:09:38.863 { 00:09:38.863 "name": null, 00:09:38.863 "uuid": "d22874fb-7bd9-4040-b653-cda9ad4a2c33", 00:09:38.863 "is_configured": false, 00:09:38.863 "data_offset": 0, 00:09:38.863 "data_size": 65536 00:09:38.863 }, 00:09:38.863 { 00:09:38.863 "name": null, 00:09:38.863 "uuid": "364f95f6-b4ec-46aa-bc06-8883f64fb47b", 00:09:38.863 "is_configured": false, 00:09:38.863 "data_offset": 0, 00:09:38.863 "data_size": 65536 00:09:38.863 }, 00:09:38.863 { 00:09:38.863 "name": "BaseBdev4", 00:09:38.863 "uuid": "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6", 00:09:38.863 "is_configured": true, 00:09:38.863 "data_offset": 0, 00:09:38.863 "data_size": 65536 00:09:38.863 } 00:09:38.863 ] 00:09:38.863 }' 00:09:38.863 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.863 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.123 [2024-11-28 02:25:12.697592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.123 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.123 "name": "Existed_Raid", 00:09:39.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.123 "strip_size_kb": 64, 00:09:39.123 "state": "configuring", 00:09:39.123 "raid_level": "raid0", 00:09:39.123 "superblock": false, 00:09:39.123 "num_base_bdevs": 4, 00:09:39.123 "num_base_bdevs_discovered": 3, 00:09:39.123 "num_base_bdevs_operational": 4, 00:09:39.123 "base_bdevs_list": [ 00:09:39.123 { 00:09:39.123 "name": "BaseBdev1", 00:09:39.123 "uuid": "a23a102e-bde0-4a68-8fb0-8f23204047c1", 00:09:39.123 "is_configured": true, 00:09:39.123 "data_offset": 0, 00:09:39.123 "data_size": 65536 00:09:39.123 }, 00:09:39.123 { 00:09:39.123 "name": null, 00:09:39.123 "uuid": "d22874fb-7bd9-4040-b653-cda9ad4a2c33", 00:09:39.123 "is_configured": false, 00:09:39.123 "data_offset": 0, 00:09:39.123 "data_size": 65536 00:09:39.123 }, 00:09:39.123 { 00:09:39.123 "name": "BaseBdev3", 00:09:39.123 "uuid": "364f95f6-b4ec-46aa-bc06-8883f64fb47b", 
00:09:39.123 "is_configured": true, 00:09:39.123 "data_offset": 0, 00:09:39.123 "data_size": 65536 00:09:39.123 }, 00:09:39.123 { 00:09:39.123 "name": "BaseBdev4", 00:09:39.123 "uuid": "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6", 00:09:39.123 "is_configured": true, 00:09:39.123 "data_offset": 0, 00:09:39.123 "data_size": 65536 00:09:39.123 } 00:09:39.123 ] 00:09:39.123 }' 00:09:39.124 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.124 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.694 [2024-11-28 02:25:13.220810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.694 02:25:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.694 "name": "Existed_Raid", 00:09:39.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.694 "strip_size_kb": 64, 00:09:39.694 "state": "configuring", 00:09:39.694 "raid_level": "raid0", 00:09:39.694 "superblock": false, 00:09:39.694 "num_base_bdevs": 4, 00:09:39.694 "num_base_bdevs_discovered": 2, 00:09:39.694 
"num_base_bdevs_operational": 4, 00:09:39.694 "base_bdevs_list": [ 00:09:39.694 { 00:09:39.694 "name": null, 00:09:39.694 "uuid": "a23a102e-bde0-4a68-8fb0-8f23204047c1", 00:09:39.694 "is_configured": false, 00:09:39.694 "data_offset": 0, 00:09:39.694 "data_size": 65536 00:09:39.694 }, 00:09:39.694 { 00:09:39.694 "name": null, 00:09:39.694 "uuid": "d22874fb-7bd9-4040-b653-cda9ad4a2c33", 00:09:39.694 "is_configured": false, 00:09:39.694 "data_offset": 0, 00:09:39.694 "data_size": 65536 00:09:39.694 }, 00:09:39.694 { 00:09:39.694 "name": "BaseBdev3", 00:09:39.694 "uuid": "364f95f6-b4ec-46aa-bc06-8883f64fb47b", 00:09:39.694 "is_configured": true, 00:09:39.694 "data_offset": 0, 00:09:39.694 "data_size": 65536 00:09:39.694 }, 00:09:39.694 { 00:09:39.694 "name": "BaseBdev4", 00:09:39.694 "uuid": "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6", 00:09:39.694 "is_configured": true, 00:09:39.694 "data_offset": 0, 00:09:39.694 "data_size": 65536 00:09:39.694 } 00:09:39.694 ] 00:09:39.694 }' 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.694 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.290 [2024-11-28 02:25:13.835730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.290 02:25:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.290 "name": "Existed_Raid", 00:09:40.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.290 "strip_size_kb": 64, 00:09:40.290 "state": "configuring", 00:09:40.290 "raid_level": "raid0", 00:09:40.290 "superblock": false, 00:09:40.290 "num_base_bdevs": 4, 00:09:40.290 "num_base_bdevs_discovered": 3, 00:09:40.290 "num_base_bdevs_operational": 4, 00:09:40.290 "base_bdevs_list": [ 00:09:40.290 { 00:09:40.290 "name": null, 00:09:40.290 "uuid": "a23a102e-bde0-4a68-8fb0-8f23204047c1", 00:09:40.290 "is_configured": false, 00:09:40.290 "data_offset": 0, 00:09:40.290 "data_size": 65536 00:09:40.290 }, 00:09:40.290 { 00:09:40.290 "name": "BaseBdev2", 00:09:40.290 "uuid": "d22874fb-7bd9-4040-b653-cda9ad4a2c33", 00:09:40.290 "is_configured": true, 00:09:40.290 "data_offset": 0, 00:09:40.290 "data_size": 65536 00:09:40.290 }, 00:09:40.290 { 00:09:40.290 "name": "BaseBdev3", 00:09:40.290 "uuid": "364f95f6-b4ec-46aa-bc06-8883f64fb47b", 00:09:40.290 "is_configured": true, 00:09:40.290 "data_offset": 0, 00:09:40.290 "data_size": 65536 00:09:40.290 }, 00:09:40.290 { 00:09:40.290 "name": "BaseBdev4", 00:09:40.290 "uuid": "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6", 00:09:40.290 "is_configured": true, 00:09:40.290 "data_offset": 0, 00:09:40.290 "data_size": 65536 00:09:40.290 } 00:09:40.290 ] 00:09:40.290 }' 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.290 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.861 
02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a23a102e-bde0-4a68-8fb0-8f23204047c1 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.861 [2024-11-28 02:25:14.433372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:40.861 [2024-11-28 02:25:14.433438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:40.861 [2024-11-28 02:25:14.433446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:40.861 [2024-11-28 02:25:14.433751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:09:40.861 [2024-11-28 02:25:14.433934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.861 [2024-11-28 02:25:14.433949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:40.861 [2024-11-28 02:25:14.434284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.861 NewBaseBdev 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.861 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:40.861 [ 00:09:40.861 { 00:09:40.861 "name": "NewBaseBdev", 00:09:40.861 "aliases": [ 00:09:40.861 "a23a102e-bde0-4a68-8fb0-8f23204047c1" 00:09:40.861 ], 00:09:40.861 "product_name": "Malloc disk", 00:09:40.861 "block_size": 512, 00:09:40.861 "num_blocks": 65536, 00:09:40.861 "uuid": "a23a102e-bde0-4a68-8fb0-8f23204047c1", 00:09:40.861 "assigned_rate_limits": { 00:09:40.861 "rw_ios_per_sec": 0, 00:09:40.861 "rw_mbytes_per_sec": 0, 00:09:40.862 "r_mbytes_per_sec": 0, 00:09:40.862 "w_mbytes_per_sec": 0 00:09:40.862 }, 00:09:40.862 "claimed": true, 00:09:40.862 "claim_type": "exclusive_write", 00:09:40.862 "zoned": false, 00:09:40.862 "supported_io_types": { 00:09:40.862 "read": true, 00:09:40.862 "write": true, 00:09:40.862 "unmap": true, 00:09:40.862 "flush": true, 00:09:40.862 "reset": true, 00:09:40.862 "nvme_admin": false, 00:09:40.862 "nvme_io": false, 00:09:40.862 "nvme_io_md": false, 00:09:40.862 "write_zeroes": true, 00:09:40.862 "zcopy": true, 00:09:40.862 "get_zone_info": false, 00:09:40.862 "zone_management": false, 00:09:40.862 "zone_append": false, 00:09:40.862 "compare": false, 00:09:40.862 "compare_and_write": false, 00:09:40.862 "abort": true, 00:09:40.862 "seek_hole": false, 00:09:40.862 "seek_data": false, 00:09:40.862 "copy": true, 00:09:40.862 "nvme_iov_md": false 00:09:40.862 }, 00:09:40.862 "memory_domains": [ 00:09:40.862 { 00:09:40.862 "dma_device_id": "system", 00:09:40.862 "dma_device_type": 1 00:09:40.862 }, 00:09:40.862 { 00:09:40.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.862 "dma_device_type": 2 00:09:40.862 } 00:09:40.862 ], 00:09:40.862 "driver_specific": {} 00:09:40.862 } 00:09:40.862 ] 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.862 "name": "Existed_Raid", 00:09:40.862 "uuid": "37a64953-1929-4dd8-b9d8-56a24928a0cd", 00:09:40.862 "strip_size_kb": 64, 00:09:40.862 "state": "online", 00:09:40.862 "raid_level": "raid0", 00:09:40.862 "superblock": false, 00:09:40.862 "num_base_bdevs": 4, 00:09:40.862 
"num_base_bdevs_discovered": 4, 00:09:40.862 "num_base_bdevs_operational": 4, 00:09:40.862 "base_bdevs_list": [ 00:09:40.862 { 00:09:40.862 "name": "NewBaseBdev", 00:09:40.862 "uuid": "a23a102e-bde0-4a68-8fb0-8f23204047c1", 00:09:40.862 "is_configured": true, 00:09:40.862 "data_offset": 0, 00:09:40.862 "data_size": 65536 00:09:40.862 }, 00:09:40.862 { 00:09:40.862 "name": "BaseBdev2", 00:09:40.862 "uuid": "d22874fb-7bd9-4040-b653-cda9ad4a2c33", 00:09:40.862 "is_configured": true, 00:09:40.862 "data_offset": 0, 00:09:40.862 "data_size": 65536 00:09:40.862 }, 00:09:40.862 { 00:09:40.862 "name": "BaseBdev3", 00:09:40.862 "uuid": "364f95f6-b4ec-46aa-bc06-8883f64fb47b", 00:09:40.862 "is_configured": true, 00:09:40.862 "data_offset": 0, 00:09:40.862 "data_size": 65536 00:09:40.862 }, 00:09:40.862 { 00:09:40.862 "name": "BaseBdev4", 00:09:40.862 "uuid": "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6", 00:09:40.862 "is_configured": true, 00:09:40.862 "data_offset": 0, 00:09:40.862 "data_size": 65536 00:09:40.862 } 00:09:40.862 ] 00:09:40.862 }' 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.862 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.431 [2024-11-28 02:25:14.901070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.431 "name": "Existed_Raid", 00:09:41.431 "aliases": [ 00:09:41.431 "37a64953-1929-4dd8-b9d8-56a24928a0cd" 00:09:41.431 ], 00:09:41.431 "product_name": "Raid Volume", 00:09:41.431 "block_size": 512, 00:09:41.431 "num_blocks": 262144, 00:09:41.431 "uuid": "37a64953-1929-4dd8-b9d8-56a24928a0cd", 00:09:41.431 "assigned_rate_limits": { 00:09:41.431 "rw_ios_per_sec": 0, 00:09:41.431 "rw_mbytes_per_sec": 0, 00:09:41.431 "r_mbytes_per_sec": 0, 00:09:41.431 "w_mbytes_per_sec": 0 00:09:41.431 }, 00:09:41.431 "claimed": false, 00:09:41.431 "zoned": false, 00:09:41.431 "supported_io_types": { 00:09:41.431 "read": true, 00:09:41.431 "write": true, 00:09:41.431 "unmap": true, 00:09:41.431 "flush": true, 00:09:41.431 "reset": true, 00:09:41.431 "nvme_admin": false, 00:09:41.431 "nvme_io": false, 00:09:41.431 "nvme_io_md": false, 00:09:41.431 "write_zeroes": true, 00:09:41.431 "zcopy": false, 00:09:41.431 "get_zone_info": false, 00:09:41.431 "zone_management": false, 00:09:41.431 "zone_append": false, 00:09:41.431 "compare": false, 00:09:41.431 "compare_and_write": false, 00:09:41.431 "abort": false, 00:09:41.431 "seek_hole": false, 00:09:41.431 "seek_data": false, 00:09:41.431 "copy": false, 00:09:41.431 "nvme_iov_md": false 00:09:41.431 }, 00:09:41.431 "memory_domains": [ 
00:09:41.431 { 00:09:41.431 "dma_device_id": "system", 00:09:41.431 "dma_device_type": 1 00:09:41.431 }, 00:09:41.431 { 00:09:41.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.431 "dma_device_type": 2 00:09:41.431 }, 00:09:41.431 { 00:09:41.431 "dma_device_id": "system", 00:09:41.431 "dma_device_type": 1 00:09:41.431 }, 00:09:41.431 { 00:09:41.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.431 "dma_device_type": 2 00:09:41.431 }, 00:09:41.431 { 00:09:41.431 "dma_device_id": "system", 00:09:41.431 "dma_device_type": 1 00:09:41.431 }, 00:09:41.431 { 00:09:41.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.431 "dma_device_type": 2 00:09:41.431 }, 00:09:41.431 { 00:09:41.431 "dma_device_id": "system", 00:09:41.431 "dma_device_type": 1 00:09:41.431 }, 00:09:41.431 { 00:09:41.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.431 "dma_device_type": 2 00:09:41.431 } 00:09:41.431 ], 00:09:41.431 "driver_specific": { 00:09:41.431 "raid": { 00:09:41.431 "uuid": "37a64953-1929-4dd8-b9d8-56a24928a0cd", 00:09:41.431 "strip_size_kb": 64, 00:09:41.431 "state": "online", 00:09:41.431 "raid_level": "raid0", 00:09:41.431 "superblock": false, 00:09:41.431 "num_base_bdevs": 4, 00:09:41.431 "num_base_bdevs_discovered": 4, 00:09:41.431 "num_base_bdevs_operational": 4, 00:09:41.431 "base_bdevs_list": [ 00:09:41.431 { 00:09:41.431 "name": "NewBaseBdev", 00:09:41.431 "uuid": "a23a102e-bde0-4a68-8fb0-8f23204047c1", 00:09:41.431 "is_configured": true, 00:09:41.431 "data_offset": 0, 00:09:41.431 "data_size": 65536 00:09:41.431 }, 00:09:41.431 { 00:09:41.431 "name": "BaseBdev2", 00:09:41.431 "uuid": "d22874fb-7bd9-4040-b653-cda9ad4a2c33", 00:09:41.431 "is_configured": true, 00:09:41.431 "data_offset": 0, 00:09:41.431 "data_size": 65536 00:09:41.431 }, 00:09:41.431 { 00:09:41.431 "name": "BaseBdev3", 00:09:41.431 "uuid": "364f95f6-b4ec-46aa-bc06-8883f64fb47b", 00:09:41.431 "is_configured": true, 00:09:41.431 "data_offset": 0, 00:09:41.431 "data_size": 65536 
00:09:41.431 }, 00:09:41.431 { 00:09:41.431 "name": "BaseBdev4", 00:09:41.431 "uuid": "5a55fa83-eb4f-44b9-a87e-33bc8ffc85f6", 00:09:41.431 "is_configured": true, 00:09:41.431 "data_offset": 0, 00:09:41.431 "data_size": 65536 00:09:41.431 } 00:09:41.431 ] 00:09:41.431 } 00:09:41.431 } 00:09:41.431 }' 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:41.431 BaseBdev2 00:09:41.431 BaseBdev3 00:09:41.431 BaseBdev4' 00:09:41.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.431 
02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.689 [2024-11-28 02:25:15.196197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.689 [2024-11-28 02:25:15.196248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.689 [2024-11-28 02:25:15.196364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.689 [2024-11-28 02:25:15.196453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.689 [2024-11-28 02:25:15.196465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69166 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69166 ']' 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69166 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69166 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.689 killing process with pid 69166 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69166' 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69166 00:09:41.689 [2024-11-28 02:25:15.244144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.689 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69166 00:09:42.257 [2024-11-28 02:25:15.677079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:43.638 00:09:43.638 real 0m11.686s 00:09:43.638 user 0m18.263s 00:09:43.638 sys 0m2.211s 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.638 ************************************ 00:09:43.638 END TEST raid_state_function_test 00:09:43.638 ************************************ 00:09:43.638 02:25:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:43.638 02:25:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:43.638 02:25:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.638 02:25:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.638 ************************************ 00:09:43.638 START TEST raid_state_function_test_sb 00:09:43.638 ************************************ 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:43.638 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:43.639 
02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:43.639 02:25:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69843 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69843' 00:09:43.639 Process raid pid: 69843 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69843 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69843 ']' 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.639 02:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.639 [2024-11-28 02:25:17.053397] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:43.639 [2024-11-28 02:25:17.053580] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.639 [2024-11-28 02:25:17.225529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.899 [2024-11-28 02:25:17.361320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.159 [2024-11-28 02:25:17.598829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.159 [2024-11-28 02:25:17.598971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.419 [2024-11-28 02:25:17.898655] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.419 [2024-11-28 02:25:17.898827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.419 [2024-11-28 02:25:17.898861] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.419 [2024-11-28 02:25:17.898886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.419 [2024-11-28 02:25:17.898905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:44.419 [2024-11-28 02:25:17.898942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.419 [2024-11-28 02:25:17.898963] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:44.419 [2024-11-28 02:25:17.898987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.419 02:25:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.419 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.419 "name": "Existed_Raid", 00:09:44.419 "uuid": "5ecf2b6c-97b4-4507-85a0-d2759e3cbb46", 00:09:44.419 "strip_size_kb": 64, 00:09:44.419 "state": "configuring", 00:09:44.419 "raid_level": "raid0", 00:09:44.419 "superblock": true, 00:09:44.419 "num_base_bdevs": 4, 00:09:44.419 "num_base_bdevs_discovered": 0, 00:09:44.419 "num_base_bdevs_operational": 4, 00:09:44.419 "base_bdevs_list": [ 00:09:44.419 { 00:09:44.419 "name": "BaseBdev1", 00:09:44.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.419 "is_configured": false, 00:09:44.419 "data_offset": 0, 00:09:44.419 "data_size": 0 00:09:44.419 }, 00:09:44.419 { 00:09:44.419 "name": "BaseBdev2", 00:09:44.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.419 "is_configured": false, 00:09:44.420 "data_offset": 0, 00:09:44.420 "data_size": 0 00:09:44.420 }, 00:09:44.420 { 00:09:44.420 "name": "BaseBdev3", 00:09:44.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.420 "is_configured": false, 00:09:44.420 "data_offset": 0, 00:09:44.420 "data_size": 0 00:09:44.420 }, 00:09:44.420 { 00:09:44.420 "name": "BaseBdev4", 00:09:44.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.420 "is_configured": false, 00:09:44.420 "data_offset": 0, 00:09:44.420 "data_size": 0 00:09:44.420 } 00:09:44.420 ] 00:09:44.420 }' 00:09:44.420 02:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.420 02:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.680 02:25:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.680 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.680 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.680 [2024-11-28 02:25:18.325908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.680 [2024-11-28 02:25:18.326078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:44.680 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.680 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:44.680 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.680 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.680 [2024-11-28 02:25:18.333852] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.680 [2024-11-28 02:25:18.333964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.680 [2024-11-28 02:25:18.333996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.680 [2024-11-28 02:25:18.334019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.680 [2024-11-28 02:25:18.334036] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.680 [2024-11-28 02:25:18.334074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.681 [2024-11-28 02:25:18.334082] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:44.681 [2024-11-28 02:25:18.334092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:44.681 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.681 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.681 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.681 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.940 [2024-11-28 02:25:18.385281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.940 BaseBdev1 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.940 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.940 [ 00:09:44.940 { 00:09:44.940 "name": "BaseBdev1", 00:09:44.940 "aliases": [ 00:09:44.940 "61bb2a44-1a39-43fb-bde0-b67b147c75d5" 00:09:44.940 ], 00:09:44.940 "product_name": "Malloc disk", 00:09:44.940 "block_size": 512, 00:09:44.940 "num_blocks": 65536, 00:09:44.940 "uuid": "61bb2a44-1a39-43fb-bde0-b67b147c75d5", 00:09:44.940 "assigned_rate_limits": { 00:09:44.940 "rw_ios_per_sec": 0, 00:09:44.940 "rw_mbytes_per_sec": 0, 00:09:44.940 "r_mbytes_per_sec": 0, 00:09:44.940 "w_mbytes_per_sec": 0 00:09:44.940 }, 00:09:44.940 "claimed": true, 00:09:44.940 "claim_type": "exclusive_write", 00:09:44.940 "zoned": false, 00:09:44.940 "supported_io_types": { 00:09:44.940 "read": true, 00:09:44.940 "write": true, 00:09:44.940 "unmap": true, 00:09:44.940 "flush": true, 00:09:44.940 "reset": true, 00:09:44.941 "nvme_admin": false, 00:09:44.941 "nvme_io": false, 00:09:44.941 "nvme_io_md": false, 00:09:44.941 "write_zeroes": true, 00:09:44.941 "zcopy": true, 00:09:44.941 "get_zone_info": false, 00:09:44.941 "zone_management": false, 00:09:44.941 "zone_append": false, 00:09:44.941 "compare": false, 00:09:44.941 "compare_and_write": false, 00:09:44.941 "abort": true, 00:09:44.941 "seek_hole": false, 00:09:44.941 "seek_data": false, 00:09:44.941 "copy": true, 00:09:44.941 "nvme_iov_md": false 00:09:44.941 }, 00:09:44.941 "memory_domains": [ 00:09:44.941 { 00:09:44.941 "dma_device_id": "system", 00:09:44.941 "dma_device_type": 1 00:09:44.941 }, 00:09:44.941 { 00:09:44.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.941 "dma_device_type": 2 00:09:44.941 } 00:09:44.941 ], 00:09:44.941 "driver_specific": {} 
00:09:44.941 } 00:09:44.941 ] 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.941 "name": "Existed_Raid", 00:09:44.941 "uuid": "2af8f260-ce4e-4a8a-9036-3c09bc701562", 00:09:44.941 "strip_size_kb": 64, 00:09:44.941 "state": "configuring", 00:09:44.941 "raid_level": "raid0", 00:09:44.941 "superblock": true, 00:09:44.941 "num_base_bdevs": 4, 00:09:44.941 "num_base_bdevs_discovered": 1, 00:09:44.941 "num_base_bdevs_operational": 4, 00:09:44.941 "base_bdevs_list": [ 00:09:44.941 { 00:09:44.941 "name": "BaseBdev1", 00:09:44.941 "uuid": "61bb2a44-1a39-43fb-bde0-b67b147c75d5", 00:09:44.941 "is_configured": true, 00:09:44.941 "data_offset": 2048, 00:09:44.941 "data_size": 63488 00:09:44.941 }, 00:09:44.941 { 00:09:44.941 "name": "BaseBdev2", 00:09:44.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.941 "is_configured": false, 00:09:44.941 "data_offset": 0, 00:09:44.941 "data_size": 0 00:09:44.941 }, 00:09:44.941 { 00:09:44.941 "name": "BaseBdev3", 00:09:44.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.941 "is_configured": false, 00:09:44.941 "data_offset": 0, 00:09:44.941 "data_size": 0 00:09:44.941 }, 00:09:44.941 { 00:09:44.941 "name": "BaseBdev4", 00:09:44.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.941 "is_configured": false, 00:09:44.941 "data_offset": 0, 00:09:44.941 "data_size": 0 00:09:44.941 } 00:09:44.941 ] 00:09:44.941 }' 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.941 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.511 [2024-11-28 02:25:18.892514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.511 [2024-11-28 02:25:18.892681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.511 [2024-11-28 02:25:18.904595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.511 [2024-11-28 02:25:18.906902] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.511 [2024-11-28 02:25:18.906963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.511 [2024-11-28 02:25:18.906992] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.511 [2024-11-28 02:25:18.907003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.511 [2024-11-28 02:25:18.907010] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:45.511 [2024-11-28 02:25:18.907019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:45.511 02:25:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.511 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.512 "name": 
"Existed_Raid", 00:09:45.512 "uuid": "21c27b2e-9fa1-4822-8db8-6c8913120cf9", 00:09:45.512 "strip_size_kb": 64, 00:09:45.512 "state": "configuring", 00:09:45.512 "raid_level": "raid0", 00:09:45.512 "superblock": true, 00:09:45.512 "num_base_bdevs": 4, 00:09:45.512 "num_base_bdevs_discovered": 1, 00:09:45.512 "num_base_bdevs_operational": 4, 00:09:45.512 "base_bdevs_list": [ 00:09:45.512 { 00:09:45.512 "name": "BaseBdev1", 00:09:45.512 "uuid": "61bb2a44-1a39-43fb-bde0-b67b147c75d5", 00:09:45.512 "is_configured": true, 00:09:45.512 "data_offset": 2048, 00:09:45.512 "data_size": 63488 00:09:45.512 }, 00:09:45.512 { 00:09:45.512 "name": "BaseBdev2", 00:09:45.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.512 "is_configured": false, 00:09:45.512 "data_offset": 0, 00:09:45.512 "data_size": 0 00:09:45.512 }, 00:09:45.512 { 00:09:45.512 "name": "BaseBdev3", 00:09:45.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.512 "is_configured": false, 00:09:45.512 "data_offset": 0, 00:09:45.512 "data_size": 0 00:09:45.512 }, 00:09:45.512 { 00:09:45.512 "name": "BaseBdev4", 00:09:45.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.512 "is_configured": false, 00:09:45.512 "data_offset": 0, 00:09:45.512 "data_size": 0 00:09:45.512 } 00:09:45.512 ] 00:09:45.512 }' 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.512 02:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.772 [2024-11-28 02:25:19.392816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:45.772 BaseBdev2 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.772 [ 00:09:45.772 { 00:09:45.772 "name": "BaseBdev2", 00:09:45.772 "aliases": [ 00:09:45.772 "1a06034b-6384-41b3-b650-52120995e93b" 00:09:45.772 ], 00:09:45.772 "product_name": "Malloc disk", 00:09:45.772 "block_size": 512, 00:09:45.772 "num_blocks": 65536, 00:09:45.772 "uuid": "1a06034b-6384-41b3-b650-52120995e93b", 00:09:45.772 
"assigned_rate_limits": { 00:09:45.772 "rw_ios_per_sec": 0, 00:09:45.772 "rw_mbytes_per_sec": 0, 00:09:45.772 "r_mbytes_per_sec": 0, 00:09:45.772 "w_mbytes_per_sec": 0 00:09:45.772 }, 00:09:45.772 "claimed": true, 00:09:45.772 "claim_type": "exclusive_write", 00:09:45.772 "zoned": false, 00:09:45.772 "supported_io_types": { 00:09:45.772 "read": true, 00:09:45.772 "write": true, 00:09:45.772 "unmap": true, 00:09:45.772 "flush": true, 00:09:45.772 "reset": true, 00:09:45.772 "nvme_admin": false, 00:09:45.772 "nvme_io": false, 00:09:45.772 "nvme_io_md": false, 00:09:45.772 "write_zeroes": true, 00:09:45.772 "zcopy": true, 00:09:45.772 "get_zone_info": false, 00:09:45.772 "zone_management": false, 00:09:45.772 "zone_append": false, 00:09:45.772 "compare": false, 00:09:45.772 "compare_and_write": false, 00:09:45.772 "abort": true, 00:09:45.772 "seek_hole": false, 00:09:45.772 "seek_data": false, 00:09:45.772 "copy": true, 00:09:45.772 "nvme_iov_md": false 00:09:45.772 }, 00:09:45.772 "memory_domains": [ 00:09:45.772 { 00:09:45.772 "dma_device_id": "system", 00:09:45.772 "dma_device_type": 1 00:09:45.772 }, 00:09:45.772 { 00:09:45.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.772 "dma_device_type": 2 00:09:45.772 } 00:09:45.772 ], 00:09:45.772 "driver_specific": {} 00:09:45.772 } 00:09:45.772 ] 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.772 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.033 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.033 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.033 "name": "Existed_Raid", 00:09:46.033 "uuid": "21c27b2e-9fa1-4822-8db8-6c8913120cf9", 00:09:46.033 "strip_size_kb": 64, 00:09:46.033 "state": "configuring", 00:09:46.033 "raid_level": "raid0", 00:09:46.033 "superblock": true, 00:09:46.033 "num_base_bdevs": 4, 00:09:46.033 "num_base_bdevs_discovered": 2, 00:09:46.033 "num_base_bdevs_operational": 4, 
00:09:46.033 "base_bdevs_list": [ 00:09:46.033 { 00:09:46.033 "name": "BaseBdev1", 00:09:46.033 "uuid": "61bb2a44-1a39-43fb-bde0-b67b147c75d5", 00:09:46.033 "is_configured": true, 00:09:46.033 "data_offset": 2048, 00:09:46.033 "data_size": 63488 00:09:46.033 }, 00:09:46.033 { 00:09:46.033 "name": "BaseBdev2", 00:09:46.033 "uuid": "1a06034b-6384-41b3-b650-52120995e93b", 00:09:46.033 "is_configured": true, 00:09:46.033 "data_offset": 2048, 00:09:46.033 "data_size": 63488 00:09:46.033 }, 00:09:46.033 { 00:09:46.033 "name": "BaseBdev3", 00:09:46.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.033 "is_configured": false, 00:09:46.033 "data_offset": 0, 00:09:46.033 "data_size": 0 00:09:46.033 }, 00:09:46.033 { 00:09:46.033 "name": "BaseBdev4", 00:09:46.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.033 "is_configured": false, 00:09:46.033 "data_offset": 0, 00:09:46.033 "data_size": 0 00:09:46.033 } 00:09:46.033 ] 00:09:46.033 }' 00:09:46.033 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.033 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.293 [2024-11-28 02:25:19.903962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.293 BaseBdev3 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.293 [ 00:09:46.293 { 00:09:46.293 "name": "BaseBdev3", 00:09:46.293 "aliases": [ 00:09:46.293 "0f9f23dd-af18-4abb-be9d-019b159a5ae6" 00:09:46.293 ], 00:09:46.293 "product_name": "Malloc disk", 00:09:46.293 "block_size": 512, 00:09:46.293 "num_blocks": 65536, 00:09:46.293 "uuid": "0f9f23dd-af18-4abb-be9d-019b159a5ae6", 00:09:46.293 "assigned_rate_limits": { 00:09:46.293 "rw_ios_per_sec": 0, 00:09:46.293 "rw_mbytes_per_sec": 0, 00:09:46.293 "r_mbytes_per_sec": 0, 00:09:46.293 "w_mbytes_per_sec": 0 00:09:46.293 }, 00:09:46.293 "claimed": true, 00:09:46.293 "claim_type": "exclusive_write", 00:09:46.293 "zoned": false, 00:09:46.293 "supported_io_types": { 00:09:46.293 "read": true, 00:09:46.293 
"write": true, 00:09:46.293 "unmap": true, 00:09:46.293 "flush": true, 00:09:46.293 "reset": true, 00:09:46.293 "nvme_admin": false, 00:09:46.293 "nvme_io": false, 00:09:46.293 "nvme_io_md": false, 00:09:46.293 "write_zeroes": true, 00:09:46.293 "zcopy": true, 00:09:46.293 "get_zone_info": false, 00:09:46.293 "zone_management": false, 00:09:46.293 "zone_append": false, 00:09:46.293 "compare": false, 00:09:46.293 "compare_and_write": false, 00:09:46.293 "abort": true, 00:09:46.293 "seek_hole": false, 00:09:46.293 "seek_data": false, 00:09:46.293 "copy": true, 00:09:46.293 "nvme_iov_md": false 00:09:46.293 }, 00:09:46.293 "memory_domains": [ 00:09:46.293 { 00:09:46.293 "dma_device_id": "system", 00:09:46.293 "dma_device_type": 1 00:09:46.293 }, 00:09:46.293 { 00:09:46.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.293 "dma_device_type": 2 00:09:46.293 } 00:09:46.293 ], 00:09:46.293 "driver_specific": {} 00:09:46.293 } 00:09:46.293 ] 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.293 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:46.294 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.294 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.294 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.294 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.294 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.294 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.294 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.294 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.294 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.294 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.553 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.553 "name": "Existed_Raid", 00:09:46.553 "uuid": "21c27b2e-9fa1-4822-8db8-6c8913120cf9", 00:09:46.553 "strip_size_kb": 64, 00:09:46.553 "state": "configuring", 00:09:46.553 "raid_level": "raid0", 00:09:46.553 "superblock": true, 00:09:46.553 "num_base_bdevs": 4, 00:09:46.553 "num_base_bdevs_discovered": 3, 00:09:46.553 "num_base_bdevs_operational": 4, 00:09:46.553 "base_bdevs_list": [ 00:09:46.553 { 00:09:46.553 "name": "BaseBdev1", 00:09:46.553 "uuid": "61bb2a44-1a39-43fb-bde0-b67b147c75d5", 00:09:46.553 "is_configured": true, 00:09:46.553 "data_offset": 2048, 00:09:46.553 "data_size": 63488 00:09:46.553 }, 00:09:46.553 { 00:09:46.553 "name": "BaseBdev2", 00:09:46.553 "uuid": 
"1a06034b-6384-41b3-b650-52120995e93b", 00:09:46.553 "is_configured": true, 00:09:46.553 "data_offset": 2048, 00:09:46.553 "data_size": 63488 00:09:46.553 }, 00:09:46.553 { 00:09:46.553 "name": "BaseBdev3", 00:09:46.553 "uuid": "0f9f23dd-af18-4abb-be9d-019b159a5ae6", 00:09:46.553 "is_configured": true, 00:09:46.553 "data_offset": 2048, 00:09:46.553 "data_size": 63488 00:09:46.553 }, 00:09:46.553 { 00:09:46.553 "name": "BaseBdev4", 00:09:46.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.553 "is_configured": false, 00:09:46.553 "data_offset": 0, 00:09:46.553 "data_size": 0 00:09:46.553 } 00:09:46.553 ] 00:09:46.553 }' 00:09:46.553 02:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.553 02:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.813 [2024-11-28 02:25:20.423768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:46.813 [2024-11-28 02:25:20.424248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.813 [2024-11-28 02:25:20.424309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:46.813 [2024-11-28 02:25:20.424663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:46.813 BaseBdev4 00:09:46.813 [2024-11-28 02:25:20.424897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.813 [2024-11-28 02:25:20.424911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:09:46.813 [2024-11-28 02:25:20.425084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.813 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.813 [ 00:09:46.813 { 00:09:46.813 "name": "BaseBdev4", 00:09:46.813 "aliases": [ 00:09:46.813 "3ddc38e0-2461-4676-9d23-5578c72542d3" 00:09:46.813 ], 00:09:46.813 "product_name": "Malloc disk", 00:09:46.813 "block_size": 512, 00:09:46.813 
"num_blocks": 65536, 00:09:46.813 "uuid": "3ddc38e0-2461-4676-9d23-5578c72542d3", 00:09:46.813 "assigned_rate_limits": { 00:09:46.813 "rw_ios_per_sec": 0, 00:09:46.813 "rw_mbytes_per_sec": 0, 00:09:46.813 "r_mbytes_per_sec": 0, 00:09:46.813 "w_mbytes_per_sec": 0 00:09:46.813 }, 00:09:46.813 "claimed": true, 00:09:46.813 "claim_type": "exclusive_write", 00:09:46.813 "zoned": false, 00:09:46.813 "supported_io_types": { 00:09:46.813 "read": true, 00:09:46.813 "write": true, 00:09:46.813 "unmap": true, 00:09:46.813 "flush": true, 00:09:46.813 "reset": true, 00:09:46.813 "nvme_admin": false, 00:09:46.813 "nvme_io": false, 00:09:46.813 "nvme_io_md": false, 00:09:46.813 "write_zeroes": true, 00:09:46.813 "zcopy": true, 00:09:46.813 "get_zone_info": false, 00:09:46.813 "zone_management": false, 00:09:46.813 "zone_append": false, 00:09:46.813 "compare": false, 00:09:46.813 "compare_and_write": false, 00:09:46.814 "abort": true, 00:09:46.814 "seek_hole": false, 00:09:46.814 "seek_data": false, 00:09:46.814 "copy": true, 00:09:46.814 "nvme_iov_md": false 00:09:46.814 }, 00:09:46.814 "memory_domains": [ 00:09:46.814 { 00:09:46.814 "dma_device_id": "system", 00:09:46.814 "dma_device_type": 1 00:09:46.814 }, 00:09:46.814 { 00:09:46.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.814 "dma_device_type": 2 00:09:46.814 } 00:09:46.814 ], 00:09:46.814 "driver_specific": {} 00:09:46.814 } 00:09:46.814 ] 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.814 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.075 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.075 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.075 "name": "Existed_Raid", 00:09:47.075 "uuid": "21c27b2e-9fa1-4822-8db8-6c8913120cf9", 00:09:47.075 "strip_size_kb": 64, 00:09:47.075 "state": "online", 00:09:47.075 "raid_level": "raid0", 00:09:47.075 "superblock": true, 00:09:47.075 "num_base_bdevs": 4, 
00:09:47.075 "num_base_bdevs_discovered": 4, 00:09:47.075 "num_base_bdevs_operational": 4, 00:09:47.075 "base_bdevs_list": [ 00:09:47.075 { 00:09:47.075 "name": "BaseBdev1", 00:09:47.075 "uuid": "61bb2a44-1a39-43fb-bde0-b67b147c75d5", 00:09:47.075 "is_configured": true, 00:09:47.075 "data_offset": 2048, 00:09:47.075 "data_size": 63488 00:09:47.075 }, 00:09:47.075 { 00:09:47.075 "name": "BaseBdev2", 00:09:47.075 "uuid": "1a06034b-6384-41b3-b650-52120995e93b", 00:09:47.075 "is_configured": true, 00:09:47.075 "data_offset": 2048, 00:09:47.075 "data_size": 63488 00:09:47.075 }, 00:09:47.075 { 00:09:47.075 "name": "BaseBdev3", 00:09:47.075 "uuid": "0f9f23dd-af18-4abb-be9d-019b159a5ae6", 00:09:47.075 "is_configured": true, 00:09:47.075 "data_offset": 2048, 00:09:47.075 "data_size": 63488 00:09:47.075 }, 00:09:47.075 { 00:09:47.075 "name": "BaseBdev4", 00:09:47.075 "uuid": "3ddc38e0-2461-4676-9d23-5578c72542d3", 00:09:47.075 "is_configured": true, 00:09:47.075 "data_offset": 2048, 00:09:47.075 "data_size": 63488 00:09:47.075 } 00:09:47.075 ] 00:09:47.075 }' 00:09:47.075 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.075 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.335 
02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.335 [2024-11-28 02:25:20.899561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.335 "name": "Existed_Raid", 00:09:47.335 "aliases": [ 00:09:47.335 "21c27b2e-9fa1-4822-8db8-6c8913120cf9" 00:09:47.335 ], 00:09:47.335 "product_name": "Raid Volume", 00:09:47.335 "block_size": 512, 00:09:47.335 "num_blocks": 253952, 00:09:47.335 "uuid": "21c27b2e-9fa1-4822-8db8-6c8913120cf9", 00:09:47.335 "assigned_rate_limits": { 00:09:47.335 "rw_ios_per_sec": 0, 00:09:47.335 "rw_mbytes_per_sec": 0, 00:09:47.335 "r_mbytes_per_sec": 0, 00:09:47.335 "w_mbytes_per_sec": 0 00:09:47.335 }, 00:09:47.335 "claimed": false, 00:09:47.335 "zoned": false, 00:09:47.335 "supported_io_types": { 00:09:47.335 "read": true, 00:09:47.335 "write": true, 00:09:47.335 "unmap": true, 00:09:47.335 "flush": true, 00:09:47.335 "reset": true, 00:09:47.335 "nvme_admin": false, 00:09:47.335 "nvme_io": false, 00:09:47.335 "nvme_io_md": false, 00:09:47.335 "write_zeroes": true, 00:09:47.335 "zcopy": false, 00:09:47.335 "get_zone_info": false, 00:09:47.335 "zone_management": false, 00:09:47.335 "zone_append": false, 00:09:47.335 "compare": false, 00:09:47.335 "compare_and_write": false, 00:09:47.335 "abort": false, 00:09:47.335 "seek_hole": false, 00:09:47.335 "seek_data": false, 00:09:47.335 "copy": false, 00:09:47.335 
"nvme_iov_md": false 00:09:47.335 }, 00:09:47.335 "memory_domains": [ 00:09:47.335 { 00:09:47.335 "dma_device_id": "system", 00:09:47.335 "dma_device_type": 1 00:09:47.335 }, 00:09:47.335 { 00:09:47.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.335 "dma_device_type": 2 00:09:47.335 }, 00:09:47.335 { 00:09:47.335 "dma_device_id": "system", 00:09:47.335 "dma_device_type": 1 00:09:47.335 }, 00:09:47.335 { 00:09:47.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.335 "dma_device_type": 2 00:09:47.335 }, 00:09:47.335 { 00:09:47.335 "dma_device_id": "system", 00:09:47.335 "dma_device_type": 1 00:09:47.335 }, 00:09:47.335 { 00:09:47.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.335 "dma_device_type": 2 00:09:47.335 }, 00:09:47.335 { 00:09:47.335 "dma_device_id": "system", 00:09:47.335 "dma_device_type": 1 00:09:47.335 }, 00:09:47.335 { 00:09:47.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.335 "dma_device_type": 2 00:09:47.335 } 00:09:47.335 ], 00:09:47.335 "driver_specific": { 00:09:47.335 "raid": { 00:09:47.335 "uuid": "21c27b2e-9fa1-4822-8db8-6c8913120cf9", 00:09:47.335 "strip_size_kb": 64, 00:09:47.335 "state": "online", 00:09:47.335 "raid_level": "raid0", 00:09:47.335 "superblock": true, 00:09:47.335 "num_base_bdevs": 4, 00:09:47.335 "num_base_bdevs_discovered": 4, 00:09:47.335 "num_base_bdevs_operational": 4, 00:09:47.335 "base_bdevs_list": [ 00:09:47.335 { 00:09:47.335 "name": "BaseBdev1", 00:09:47.335 "uuid": "61bb2a44-1a39-43fb-bde0-b67b147c75d5", 00:09:47.335 "is_configured": true, 00:09:47.335 "data_offset": 2048, 00:09:47.335 "data_size": 63488 00:09:47.335 }, 00:09:47.335 { 00:09:47.335 "name": "BaseBdev2", 00:09:47.335 "uuid": "1a06034b-6384-41b3-b650-52120995e93b", 00:09:47.335 "is_configured": true, 00:09:47.335 "data_offset": 2048, 00:09:47.335 "data_size": 63488 00:09:47.335 }, 00:09:47.335 { 00:09:47.335 "name": "BaseBdev3", 00:09:47.335 "uuid": "0f9f23dd-af18-4abb-be9d-019b159a5ae6", 00:09:47.335 "is_configured": true, 
00:09:47.335 "data_offset": 2048, 00:09:47.335 "data_size": 63488 00:09:47.335 }, 00:09:47.335 { 00:09:47.335 "name": "BaseBdev4", 00:09:47.335 "uuid": "3ddc38e0-2461-4676-9d23-5578c72542d3", 00:09:47.335 "is_configured": true, 00:09:47.335 "data_offset": 2048, 00:09:47.335 "data_size": 63488 00:09:47.335 } 00:09:47.335 ] 00:09:47.335 } 00:09:47.335 } 00:09:47.335 }' 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:47.335 BaseBdev2 00:09:47.335 BaseBdev3 00:09:47.335 BaseBdev4' 00:09:47.335 02:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.595 02:25:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.595 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 [2024-11-28 02:25:21.218664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:47.595 [2024-11-28 02:25:21.218710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.595 [2024-11-28 02:25:21.218771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.855 "name": "Existed_Raid", 00:09:47.855 "uuid": "21c27b2e-9fa1-4822-8db8-6c8913120cf9", 00:09:47.855 "strip_size_kb": 64, 00:09:47.855 "state": "offline", 00:09:47.855 "raid_level": "raid0", 00:09:47.855 "superblock": true, 00:09:47.855 "num_base_bdevs": 4, 00:09:47.855 "num_base_bdevs_discovered": 3, 00:09:47.855 "num_base_bdevs_operational": 3, 00:09:47.855 "base_bdevs_list": [ 00:09:47.855 { 00:09:47.855 "name": null, 00:09:47.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.855 "is_configured": false, 00:09:47.855 "data_offset": 0, 00:09:47.855 "data_size": 63488 00:09:47.855 }, 00:09:47.855 { 00:09:47.855 "name": "BaseBdev2", 00:09:47.855 "uuid": "1a06034b-6384-41b3-b650-52120995e93b", 00:09:47.855 "is_configured": true, 00:09:47.855 "data_offset": 2048, 00:09:47.855 "data_size": 63488 00:09:47.855 }, 00:09:47.855 { 00:09:47.855 "name": "BaseBdev3", 00:09:47.855 "uuid": "0f9f23dd-af18-4abb-be9d-019b159a5ae6", 00:09:47.855 "is_configured": true, 00:09:47.855 "data_offset": 2048, 00:09:47.855 "data_size": 63488 00:09:47.855 }, 00:09:47.855 { 00:09:47.855 "name": "BaseBdev4", 00:09:47.855 "uuid": "3ddc38e0-2461-4676-9d23-5578c72542d3", 00:09:47.855 "is_configured": true, 00:09:47.855 "data_offset": 2048, 00:09:47.855 "data_size": 63488 00:09:47.855 } 00:09:47.855 ] 00:09:47.855 }' 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.855 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.115 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:48.115 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.115 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.115 
02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.115 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.115 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.115 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.374 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.374 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.374 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.375 [2024-11-28 02:25:21.811779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.375 02:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.375 [2024-11-28 02:25:21.969716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:48.634 02:25:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.634 [2024-11-28 02:25:22.128755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:48.634 [2024-11-28 02:25:22.128901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.634 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.895 BaseBdev2 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.895 [ 00:09:48.895 { 00:09:48.895 "name": "BaseBdev2", 00:09:48.895 "aliases": [ 00:09:48.895 
"cea83ba4-37f9-4a7c-836a-aef038439cb7" 00:09:48.895 ], 00:09:48.895 "product_name": "Malloc disk", 00:09:48.895 "block_size": 512, 00:09:48.895 "num_blocks": 65536, 00:09:48.895 "uuid": "cea83ba4-37f9-4a7c-836a-aef038439cb7", 00:09:48.895 "assigned_rate_limits": { 00:09:48.895 "rw_ios_per_sec": 0, 00:09:48.895 "rw_mbytes_per_sec": 0, 00:09:48.895 "r_mbytes_per_sec": 0, 00:09:48.895 "w_mbytes_per_sec": 0 00:09:48.895 }, 00:09:48.895 "claimed": false, 00:09:48.895 "zoned": false, 00:09:48.895 "supported_io_types": { 00:09:48.895 "read": true, 00:09:48.895 "write": true, 00:09:48.895 "unmap": true, 00:09:48.895 "flush": true, 00:09:48.895 "reset": true, 00:09:48.895 "nvme_admin": false, 00:09:48.895 "nvme_io": false, 00:09:48.895 "nvme_io_md": false, 00:09:48.895 "write_zeroes": true, 00:09:48.895 "zcopy": true, 00:09:48.895 "get_zone_info": false, 00:09:48.895 "zone_management": false, 00:09:48.895 "zone_append": false, 00:09:48.895 "compare": false, 00:09:48.895 "compare_and_write": false, 00:09:48.895 "abort": true, 00:09:48.895 "seek_hole": false, 00:09:48.895 "seek_data": false, 00:09:48.895 "copy": true, 00:09:48.895 "nvme_iov_md": false 00:09:48.895 }, 00:09:48.895 "memory_domains": [ 00:09:48.895 { 00:09:48.895 "dma_device_id": "system", 00:09:48.895 "dma_device_type": 1 00:09:48.895 }, 00:09:48.895 { 00:09:48.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.895 "dma_device_type": 2 00:09:48.895 } 00:09:48.895 ], 00:09:48.895 "driver_specific": {} 00:09:48.895 } 00:09:48.895 ] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:48.895 02:25:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.895 BaseBdev3 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.895 [ 00:09:48.895 { 
00:09:48.895 "name": "BaseBdev3", 00:09:48.895 "aliases": [ 00:09:48.895 "227e7804-9d97-4567-9943-6a85dc6ad9aa" 00:09:48.895 ], 00:09:48.895 "product_name": "Malloc disk", 00:09:48.895 "block_size": 512, 00:09:48.895 "num_blocks": 65536, 00:09:48.895 "uuid": "227e7804-9d97-4567-9943-6a85dc6ad9aa", 00:09:48.895 "assigned_rate_limits": { 00:09:48.895 "rw_ios_per_sec": 0, 00:09:48.895 "rw_mbytes_per_sec": 0, 00:09:48.895 "r_mbytes_per_sec": 0, 00:09:48.895 "w_mbytes_per_sec": 0 00:09:48.895 }, 00:09:48.895 "claimed": false, 00:09:48.895 "zoned": false, 00:09:48.895 "supported_io_types": { 00:09:48.895 "read": true, 00:09:48.895 "write": true, 00:09:48.895 "unmap": true, 00:09:48.895 "flush": true, 00:09:48.895 "reset": true, 00:09:48.895 "nvme_admin": false, 00:09:48.895 "nvme_io": false, 00:09:48.895 "nvme_io_md": false, 00:09:48.895 "write_zeroes": true, 00:09:48.895 "zcopy": true, 00:09:48.895 "get_zone_info": false, 00:09:48.895 "zone_management": false, 00:09:48.895 "zone_append": false, 00:09:48.895 "compare": false, 00:09:48.895 "compare_and_write": false, 00:09:48.895 "abort": true, 00:09:48.895 "seek_hole": false, 00:09:48.895 "seek_data": false, 00:09:48.895 "copy": true, 00:09:48.895 "nvme_iov_md": false 00:09:48.895 }, 00:09:48.895 "memory_domains": [ 00:09:48.895 { 00:09:48.895 "dma_device_id": "system", 00:09:48.895 "dma_device_type": 1 00:09:48.895 }, 00:09:48.895 { 00:09:48.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.895 "dma_device_type": 2 00:09:48.895 } 00:09:48.895 ], 00:09:48.895 "driver_specific": {} 00:09:48.895 } 00:09:48.895 ] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.895 BaseBdev4 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:48.895 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:48.896 [ 00:09:48.896 { 00:09:48.896 "name": "BaseBdev4", 00:09:48.896 "aliases": [ 00:09:48.896 "ced2f79d-f52f-4bc4-99f7-1e8f01053e44" 00:09:48.896 ], 00:09:48.896 "product_name": "Malloc disk", 00:09:48.896 "block_size": 512, 00:09:48.896 "num_blocks": 65536, 00:09:48.896 "uuid": "ced2f79d-f52f-4bc4-99f7-1e8f01053e44", 00:09:48.896 "assigned_rate_limits": { 00:09:48.896 "rw_ios_per_sec": 0, 00:09:48.896 "rw_mbytes_per_sec": 0, 00:09:48.896 "r_mbytes_per_sec": 0, 00:09:48.896 "w_mbytes_per_sec": 0 00:09:48.896 }, 00:09:48.896 "claimed": false, 00:09:48.896 "zoned": false, 00:09:48.896 "supported_io_types": { 00:09:48.896 "read": true, 00:09:48.896 "write": true, 00:09:48.896 "unmap": true, 00:09:48.896 "flush": true, 00:09:48.896 "reset": true, 00:09:48.896 "nvme_admin": false, 00:09:48.896 "nvme_io": false, 00:09:48.896 "nvme_io_md": false, 00:09:48.896 "write_zeroes": true, 00:09:48.896 "zcopy": true, 00:09:48.896 "get_zone_info": false, 00:09:48.896 "zone_management": false, 00:09:48.896 "zone_append": false, 00:09:48.896 "compare": false, 00:09:48.896 "compare_and_write": false, 00:09:48.896 "abort": true, 00:09:48.896 "seek_hole": false, 00:09:48.896 "seek_data": false, 00:09:48.896 "copy": true, 00:09:48.896 "nvme_iov_md": false 00:09:48.896 }, 00:09:48.896 "memory_domains": [ 00:09:48.896 { 00:09:48.896 "dma_device_id": "system", 00:09:48.896 "dma_device_type": 1 00:09:48.896 }, 00:09:48.896 { 00:09:48.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.896 "dma_device_type": 2 00:09:48.896 } 00:09:48.896 ], 00:09:48.896 "driver_specific": {} 00:09:48.896 } 00:09:48.896 ] 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:48.896 02:25:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.896 [2024-11-28 02:25:22.544647] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.896 [2024-11-28 02:25:22.544800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.896 [2024-11-28 02:25:22.544857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.896 [2024-11-28 02:25:22.547138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.896 [2024-11-28 02:25:22.547244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.896 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.155 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.155 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.155 "name": "Existed_Raid", 00:09:49.155 "uuid": "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb", 00:09:49.155 "strip_size_kb": 64, 00:09:49.155 "state": "configuring", 00:09:49.155 "raid_level": "raid0", 00:09:49.155 "superblock": true, 00:09:49.155 "num_base_bdevs": 4, 00:09:49.155 "num_base_bdevs_discovered": 3, 00:09:49.155 "num_base_bdevs_operational": 4, 00:09:49.155 "base_bdevs_list": [ 00:09:49.155 { 00:09:49.155 "name": "BaseBdev1", 00:09:49.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.155 "is_configured": false, 00:09:49.155 "data_offset": 0, 00:09:49.155 "data_size": 0 00:09:49.155 }, 00:09:49.155 { 00:09:49.155 "name": "BaseBdev2", 00:09:49.155 "uuid": "cea83ba4-37f9-4a7c-836a-aef038439cb7", 00:09:49.155 "is_configured": true, 00:09:49.155 "data_offset": 2048, 00:09:49.155 "data_size": 63488 
00:09:49.155 }, 00:09:49.155 { 00:09:49.155 "name": "BaseBdev3", 00:09:49.155 "uuid": "227e7804-9d97-4567-9943-6a85dc6ad9aa", 00:09:49.155 "is_configured": true, 00:09:49.155 "data_offset": 2048, 00:09:49.155 "data_size": 63488 00:09:49.155 }, 00:09:49.155 { 00:09:49.155 "name": "BaseBdev4", 00:09:49.155 "uuid": "ced2f79d-f52f-4bc4-99f7-1e8f01053e44", 00:09:49.155 "is_configured": true, 00:09:49.155 "data_offset": 2048, 00:09:49.155 "data_size": 63488 00:09:49.155 } 00:09:49.155 ] 00:09:49.155 }' 00:09:49.155 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.155 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.414 [2024-11-28 02:25:22.987831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.414 02:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.414 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.414 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.414 "name": "Existed_Raid", 00:09:49.414 "uuid": "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb", 00:09:49.414 "strip_size_kb": 64, 00:09:49.414 "state": "configuring", 00:09:49.414 "raid_level": "raid0", 00:09:49.414 "superblock": true, 00:09:49.414 "num_base_bdevs": 4, 00:09:49.414 "num_base_bdevs_discovered": 2, 00:09:49.414 "num_base_bdevs_operational": 4, 00:09:49.414 "base_bdevs_list": [ 00:09:49.414 { 00:09:49.414 "name": "BaseBdev1", 00:09:49.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.414 "is_configured": false, 00:09:49.414 "data_offset": 0, 00:09:49.414 "data_size": 0 00:09:49.414 }, 00:09:49.414 { 00:09:49.414 "name": null, 00:09:49.414 "uuid": "cea83ba4-37f9-4a7c-836a-aef038439cb7", 00:09:49.414 "is_configured": false, 00:09:49.414 "data_offset": 0, 00:09:49.414 "data_size": 63488 
00:09:49.414 }, 00:09:49.414 { 00:09:49.414 "name": "BaseBdev3", 00:09:49.414 "uuid": "227e7804-9d97-4567-9943-6a85dc6ad9aa", 00:09:49.414 "is_configured": true, 00:09:49.414 "data_offset": 2048, 00:09:49.414 "data_size": 63488 00:09:49.414 }, 00:09:49.414 { 00:09:49.414 "name": "BaseBdev4", 00:09:49.414 "uuid": "ced2f79d-f52f-4bc4-99f7-1e8f01053e44", 00:09:49.414 "is_configured": true, 00:09:49.414 "data_offset": 2048, 00:09:49.415 "data_size": 63488 00:09:49.415 } 00:09:49.415 ] 00:09:49.415 }' 00:09:49.415 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.415 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.676 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:49.676 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.676 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.676 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.935 [2024-11-28 02:25:23.437894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.935 BaseBdev1 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.935 [ 00:09:49.935 { 00:09:49.935 "name": "BaseBdev1", 00:09:49.935 "aliases": [ 00:09:49.935 "6f4bbe62-6129-416f-a40d-60cbf0183de7" 00:09:49.935 ], 00:09:49.935 "product_name": "Malloc disk", 00:09:49.935 "block_size": 512, 00:09:49.935 "num_blocks": 65536, 00:09:49.935 "uuid": "6f4bbe62-6129-416f-a40d-60cbf0183de7", 00:09:49.935 "assigned_rate_limits": { 00:09:49.935 "rw_ios_per_sec": 0, 00:09:49.935 "rw_mbytes_per_sec": 0, 
00:09:49.935 "r_mbytes_per_sec": 0, 00:09:49.935 "w_mbytes_per_sec": 0 00:09:49.935 }, 00:09:49.935 "claimed": true, 00:09:49.935 "claim_type": "exclusive_write", 00:09:49.935 "zoned": false, 00:09:49.935 "supported_io_types": { 00:09:49.935 "read": true, 00:09:49.935 "write": true, 00:09:49.935 "unmap": true, 00:09:49.935 "flush": true, 00:09:49.935 "reset": true, 00:09:49.935 "nvme_admin": false, 00:09:49.935 "nvme_io": false, 00:09:49.935 "nvme_io_md": false, 00:09:49.935 "write_zeroes": true, 00:09:49.935 "zcopy": true, 00:09:49.935 "get_zone_info": false, 00:09:49.935 "zone_management": false, 00:09:49.935 "zone_append": false, 00:09:49.935 "compare": false, 00:09:49.935 "compare_and_write": false, 00:09:49.935 "abort": true, 00:09:49.935 "seek_hole": false, 00:09:49.935 "seek_data": false, 00:09:49.935 "copy": true, 00:09:49.935 "nvme_iov_md": false 00:09:49.935 }, 00:09:49.935 "memory_domains": [ 00:09:49.935 { 00:09:49.935 "dma_device_id": "system", 00:09:49.935 "dma_device_type": 1 00:09:49.935 }, 00:09:49.935 { 00:09:49.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.935 "dma_device_type": 2 00:09:49.935 } 00:09:49.935 ], 00:09:49.935 "driver_specific": {} 00:09:49.935 } 00:09:49.935 ] 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.935 02:25:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.935 "name": "Existed_Raid", 00:09:49.935 "uuid": "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb", 00:09:49.935 "strip_size_kb": 64, 00:09:49.935 "state": "configuring", 00:09:49.935 "raid_level": "raid0", 00:09:49.935 "superblock": true, 00:09:49.935 "num_base_bdevs": 4, 00:09:49.935 "num_base_bdevs_discovered": 3, 00:09:49.935 "num_base_bdevs_operational": 4, 00:09:49.935 "base_bdevs_list": [ 00:09:49.935 { 00:09:49.935 "name": "BaseBdev1", 00:09:49.935 "uuid": "6f4bbe62-6129-416f-a40d-60cbf0183de7", 00:09:49.935 "is_configured": true, 00:09:49.935 "data_offset": 2048, 00:09:49.935 "data_size": 63488 00:09:49.935 }, 00:09:49.935 { 
00:09:49.935 "name": null, 00:09:49.935 "uuid": "cea83ba4-37f9-4a7c-836a-aef038439cb7", 00:09:49.935 "is_configured": false, 00:09:49.935 "data_offset": 0, 00:09:49.935 "data_size": 63488 00:09:49.935 }, 00:09:49.935 { 00:09:49.935 "name": "BaseBdev3", 00:09:49.935 "uuid": "227e7804-9d97-4567-9943-6a85dc6ad9aa", 00:09:49.935 "is_configured": true, 00:09:49.935 "data_offset": 2048, 00:09:49.935 "data_size": 63488 00:09:49.935 }, 00:09:49.935 { 00:09:49.935 "name": "BaseBdev4", 00:09:49.935 "uuid": "ced2f79d-f52f-4bc4-99f7-1e8f01053e44", 00:09:49.935 "is_configured": true, 00:09:49.935 "data_offset": 2048, 00:09:49.935 "data_size": 63488 00:09:49.935 } 00:09:49.935 ] 00:09:49.935 }' 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.935 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.505 [2024-11-28 02:25:23.981069] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.505 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.505 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.505 02:25:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.505 "name": "Existed_Raid", 00:09:50.505 "uuid": "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb", 00:09:50.505 "strip_size_kb": 64, 00:09:50.505 "state": "configuring", 00:09:50.505 "raid_level": "raid0", 00:09:50.505 "superblock": true, 00:09:50.505 "num_base_bdevs": 4, 00:09:50.505 "num_base_bdevs_discovered": 2, 00:09:50.505 "num_base_bdevs_operational": 4, 00:09:50.505 "base_bdevs_list": [ 00:09:50.505 { 00:09:50.505 "name": "BaseBdev1", 00:09:50.505 "uuid": "6f4bbe62-6129-416f-a40d-60cbf0183de7", 00:09:50.505 "is_configured": true, 00:09:50.505 "data_offset": 2048, 00:09:50.505 "data_size": 63488 00:09:50.505 }, 00:09:50.505 { 00:09:50.505 "name": null, 00:09:50.505 "uuid": "cea83ba4-37f9-4a7c-836a-aef038439cb7", 00:09:50.505 "is_configured": false, 00:09:50.505 "data_offset": 0, 00:09:50.505 "data_size": 63488 00:09:50.505 }, 00:09:50.505 { 00:09:50.505 "name": null, 00:09:50.505 "uuid": "227e7804-9d97-4567-9943-6a85dc6ad9aa", 00:09:50.505 "is_configured": false, 00:09:50.505 "data_offset": 0, 00:09:50.505 "data_size": 63488 00:09:50.505 }, 00:09:50.505 { 00:09:50.505 "name": "BaseBdev4", 00:09:50.505 "uuid": "ced2f79d-f52f-4bc4-99f7-1e8f01053e44", 00:09:50.505 "is_configured": true, 00:09:50.505 "data_offset": 2048, 00:09:50.505 "data_size": 63488 00:09:50.505 } 00:09:50.505 ] 00:09:50.505 }' 00:09:50.505 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.505 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.764 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.764 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.764 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.764 02:25:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:50.764 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.022 [2024-11-28 02:25:24.464252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.022 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.022 "name": "Existed_Raid", 00:09:51.022 "uuid": "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb", 00:09:51.022 "strip_size_kb": 64, 00:09:51.022 "state": "configuring", 00:09:51.022 "raid_level": "raid0", 00:09:51.022 "superblock": true, 00:09:51.022 "num_base_bdevs": 4, 00:09:51.022 "num_base_bdevs_discovered": 3, 00:09:51.022 "num_base_bdevs_operational": 4, 00:09:51.022 "base_bdevs_list": [ 00:09:51.022 { 00:09:51.022 "name": "BaseBdev1", 00:09:51.022 "uuid": "6f4bbe62-6129-416f-a40d-60cbf0183de7", 00:09:51.022 "is_configured": true, 00:09:51.023 "data_offset": 2048, 00:09:51.023 "data_size": 63488 00:09:51.023 }, 00:09:51.023 { 00:09:51.023 "name": null, 00:09:51.023 "uuid": "cea83ba4-37f9-4a7c-836a-aef038439cb7", 00:09:51.023 "is_configured": false, 00:09:51.023 "data_offset": 0, 00:09:51.023 "data_size": 63488 00:09:51.023 }, 00:09:51.023 { 00:09:51.023 "name": "BaseBdev3", 00:09:51.023 "uuid": "227e7804-9d97-4567-9943-6a85dc6ad9aa", 00:09:51.023 "is_configured": true, 00:09:51.023 "data_offset": 2048, 00:09:51.023 "data_size": 63488 00:09:51.023 }, 00:09:51.023 { 00:09:51.023 "name": "BaseBdev4", 00:09:51.023 "uuid": 
"ced2f79d-f52f-4bc4-99f7-1e8f01053e44", 00:09:51.023 "is_configured": true, 00:09:51.023 "data_offset": 2048, 00:09:51.023 "data_size": 63488 00:09:51.023 } 00:09:51.023 ] 00:09:51.023 }' 00:09:51.023 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.023 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.282 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.282 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.282 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.282 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.282 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.282 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:51.282 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:51.282 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.282 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.282 [2024-11-28 02:25:24.931524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.542 "name": "Existed_Raid", 00:09:51.542 "uuid": "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb", 00:09:51.542 "strip_size_kb": 64, 00:09:51.542 "state": "configuring", 00:09:51.542 "raid_level": "raid0", 00:09:51.542 "superblock": true, 00:09:51.542 "num_base_bdevs": 4, 00:09:51.542 "num_base_bdevs_discovered": 2, 00:09:51.542 "num_base_bdevs_operational": 4, 00:09:51.542 "base_bdevs_list": [ 00:09:51.542 { 00:09:51.542 "name": null, 00:09:51.542 
"uuid": "6f4bbe62-6129-416f-a40d-60cbf0183de7", 00:09:51.542 "is_configured": false, 00:09:51.542 "data_offset": 0, 00:09:51.542 "data_size": 63488 00:09:51.542 }, 00:09:51.542 { 00:09:51.542 "name": null, 00:09:51.542 "uuid": "cea83ba4-37f9-4a7c-836a-aef038439cb7", 00:09:51.542 "is_configured": false, 00:09:51.542 "data_offset": 0, 00:09:51.542 "data_size": 63488 00:09:51.542 }, 00:09:51.542 { 00:09:51.542 "name": "BaseBdev3", 00:09:51.542 "uuid": "227e7804-9d97-4567-9943-6a85dc6ad9aa", 00:09:51.542 "is_configured": true, 00:09:51.542 "data_offset": 2048, 00:09:51.542 "data_size": 63488 00:09:51.542 }, 00:09:51.542 { 00:09:51.542 "name": "BaseBdev4", 00:09:51.542 "uuid": "ced2f79d-f52f-4bc4-99f7-1e8f01053e44", 00:09:51.542 "is_configured": true, 00:09:51.542 "data_offset": 2048, 00:09:51.542 "data_size": 63488 00:09:51.542 } 00:09:51.542 ] 00:09:51.542 }' 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.542 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.110 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.110 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.110 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:52.110 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:52.110 02:25:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.111 [2024-11-28 02:25:25.544090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.111 02:25:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.111 "name": "Existed_Raid", 00:09:52.111 "uuid": "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb", 00:09:52.111 "strip_size_kb": 64, 00:09:52.111 "state": "configuring", 00:09:52.111 "raid_level": "raid0", 00:09:52.111 "superblock": true, 00:09:52.111 "num_base_bdevs": 4, 00:09:52.111 "num_base_bdevs_discovered": 3, 00:09:52.111 "num_base_bdevs_operational": 4, 00:09:52.111 "base_bdevs_list": [ 00:09:52.111 { 00:09:52.111 "name": null, 00:09:52.111 "uuid": "6f4bbe62-6129-416f-a40d-60cbf0183de7", 00:09:52.111 "is_configured": false, 00:09:52.111 "data_offset": 0, 00:09:52.111 "data_size": 63488 00:09:52.111 }, 00:09:52.111 { 00:09:52.111 "name": "BaseBdev2", 00:09:52.111 "uuid": "cea83ba4-37f9-4a7c-836a-aef038439cb7", 00:09:52.111 "is_configured": true, 00:09:52.111 "data_offset": 2048, 00:09:52.111 "data_size": 63488 00:09:52.111 }, 00:09:52.111 { 00:09:52.111 "name": "BaseBdev3", 00:09:52.111 "uuid": "227e7804-9d97-4567-9943-6a85dc6ad9aa", 00:09:52.111 "is_configured": true, 00:09:52.111 "data_offset": 2048, 00:09:52.111 "data_size": 63488 00:09:52.111 }, 00:09:52.111 { 00:09:52.111 "name": "BaseBdev4", 00:09:52.111 "uuid": "ced2f79d-f52f-4bc4-99f7-1e8f01053e44", 00:09:52.111 "is_configured": true, 00:09:52.111 "data_offset": 2048, 00:09:52.111 "data_size": 63488 00:09:52.111 } 00:09:52.111 ] 00:09:52.111 }' 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.111 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.370 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.370 02:25:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.370 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.370 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.370 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:52.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:52.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6f4bbe62-6129-416f-a40d-60cbf0183de7 00:09:52.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.630 [2024-11-28 02:25:26.129249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:52.630 [2024-11-28 02:25:26.129530] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:52.630 [2024-11-28 02:25:26.129543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:52.630 [2024-11-28 02:25:26.129856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:09:52.630 NewBaseBdev 00:09:52.630 [2024-11-28 02:25:26.130035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:52.630 [2024-11-28 02:25:26.130049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:52.631 [2024-11-28 02:25:26.130213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.631 02:25:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.631 [ 00:09:52.631 { 00:09:52.631 "name": "NewBaseBdev", 00:09:52.631 "aliases": [ 00:09:52.631 "6f4bbe62-6129-416f-a40d-60cbf0183de7" 00:09:52.631 ], 00:09:52.631 "product_name": "Malloc disk", 00:09:52.631 "block_size": 512, 00:09:52.631 "num_blocks": 65536, 00:09:52.631 "uuid": "6f4bbe62-6129-416f-a40d-60cbf0183de7", 00:09:52.631 "assigned_rate_limits": { 00:09:52.631 "rw_ios_per_sec": 0, 00:09:52.631 "rw_mbytes_per_sec": 0, 00:09:52.631 "r_mbytes_per_sec": 0, 00:09:52.631 "w_mbytes_per_sec": 0 00:09:52.631 }, 00:09:52.631 "claimed": true, 00:09:52.631 "claim_type": "exclusive_write", 00:09:52.631 "zoned": false, 00:09:52.631 "supported_io_types": { 00:09:52.631 "read": true, 00:09:52.631 "write": true, 00:09:52.631 "unmap": true, 00:09:52.631 "flush": true, 00:09:52.631 "reset": true, 00:09:52.631 "nvme_admin": false, 00:09:52.631 "nvme_io": false, 00:09:52.631 "nvme_io_md": false, 00:09:52.631 "write_zeroes": true, 00:09:52.631 "zcopy": true, 00:09:52.631 "get_zone_info": false, 00:09:52.631 "zone_management": false, 00:09:52.631 "zone_append": false, 00:09:52.631 "compare": false, 00:09:52.631 "compare_and_write": false, 00:09:52.631 "abort": true, 00:09:52.631 "seek_hole": false, 00:09:52.631 "seek_data": false, 00:09:52.631 "copy": true, 00:09:52.631 "nvme_iov_md": false 00:09:52.631 }, 00:09:52.631 "memory_domains": [ 00:09:52.631 { 00:09:52.631 "dma_device_id": "system", 00:09:52.631 "dma_device_type": 1 00:09:52.631 }, 00:09:52.631 { 00:09:52.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.631 "dma_device_type": 2 00:09:52.631 } 00:09:52.631 ], 00:09:52.631 "driver_specific": {} 00:09:52.631 } 00:09:52.631 ] 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:52.631 02:25:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.631 "name": "Existed_Raid", 00:09:52.631 "uuid": "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb", 00:09:52.631 "strip_size_kb": 64, 00:09:52.631 
"state": "online", 00:09:52.631 "raid_level": "raid0", 00:09:52.631 "superblock": true, 00:09:52.631 "num_base_bdevs": 4, 00:09:52.631 "num_base_bdevs_discovered": 4, 00:09:52.631 "num_base_bdevs_operational": 4, 00:09:52.631 "base_bdevs_list": [ 00:09:52.631 { 00:09:52.631 "name": "NewBaseBdev", 00:09:52.631 "uuid": "6f4bbe62-6129-416f-a40d-60cbf0183de7", 00:09:52.631 "is_configured": true, 00:09:52.631 "data_offset": 2048, 00:09:52.631 "data_size": 63488 00:09:52.631 }, 00:09:52.631 { 00:09:52.631 "name": "BaseBdev2", 00:09:52.631 "uuid": "cea83ba4-37f9-4a7c-836a-aef038439cb7", 00:09:52.631 "is_configured": true, 00:09:52.631 "data_offset": 2048, 00:09:52.631 "data_size": 63488 00:09:52.631 }, 00:09:52.631 { 00:09:52.631 "name": "BaseBdev3", 00:09:52.631 "uuid": "227e7804-9d97-4567-9943-6a85dc6ad9aa", 00:09:52.631 "is_configured": true, 00:09:52.631 "data_offset": 2048, 00:09:52.631 "data_size": 63488 00:09:52.631 }, 00:09:52.631 { 00:09:52.631 "name": "BaseBdev4", 00:09:52.631 "uuid": "ced2f79d-f52f-4bc4-99f7-1e8f01053e44", 00:09:52.631 "is_configured": true, 00:09:52.631 "data_offset": 2048, 00:09:52.631 "data_size": 63488 00:09:52.631 } 00:09:52.631 ] 00:09:52.631 }' 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.631 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.202 
02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.202 [2024-11-28 02:25:26.652840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.202 "name": "Existed_Raid", 00:09:53.202 "aliases": [ 00:09:53.202 "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb" 00:09:53.202 ], 00:09:53.202 "product_name": "Raid Volume", 00:09:53.202 "block_size": 512, 00:09:53.202 "num_blocks": 253952, 00:09:53.202 "uuid": "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb", 00:09:53.202 "assigned_rate_limits": { 00:09:53.202 "rw_ios_per_sec": 0, 00:09:53.202 "rw_mbytes_per_sec": 0, 00:09:53.202 "r_mbytes_per_sec": 0, 00:09:53.202 "w_mbytes_per_sec": 0 00:09:53.202 }, 00:09:53.202 "claimed": false, 00:09:53.202 "zoned": false, 00:09:53.202 "supported_io_types": { 00:09:53.202 "read": true, 00:09:53.202 "write": true, 00:09:53.202 "unmap": true, 00:09:53.202 "flush": true, 00:09:53.202 "reset": true, 00:09:53.202 "nvme_admin": false, 00:09:53.202 "nvme_io": false, 00:09:53.202 "nvme_io_md": false, 00:09:53.202 "write_zeroes": true, 00:09:53.202 "zcopy": false, 00:09:53.202 "get_zone_info": false, 00:09:53.202 "zone_management": false, 00:09:53.202 "zone_append": false, 00:09:53.202 "compare": false, 00:09:53.202 "compare_and_write": false, 00:09:53.202 "abort": 
false, 00:09:53.202 "seek_hole": false, 00:09:53.202 "seek_data": false, 00:09:53.202 "copy": false, 00:09:53.202 "nvme_iov_md": false 00:09:53.202 }, 00:09:53.202 "memory_domains": [ 00:09:53.202 { 00:09:53.202 "dma_device_id": "system", 00:09:53.202 "dma_device_type": 1 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.202 "dma_device_type": 2 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "dma_device_id": "system", 00:09:53.202 "dma_device_type": 1 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.202 "dma_device_type": 2 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "dma_device_id": "system", 00:09:53.202 "dma_device_type": 1 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.202 "dma_device_type": 2 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "dma_device_id": "system", 00:09:53.202 "dma_device_type": 1 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.202 "dma_device_type": 2 00:09:53.202 } 00:09:53.202 ], 00:09:53.202 "driver_specific": { 00:09:53.202 "raid": { 00:09:53.202 "uuid": "7736a8f6-7f07-4ca1-a555-4bb4ac22d1cb", 00:09:53.202 "strip_size_kb": 64, 00:09:53.202 "state": "online", 00:09:53.202 "raid_level": "raid0", 00:09:53.202 "superblock": true, 00:09:53.202 "num_base_bdevs": 4, 00:09:53.202 "num_base_bdevs_discovered": 4, 00:09:53.202 "num_base_bdevs_operational": 4, 00:09:53.202 "base_bdevs_list": [ 00:09:53.202 { 00:09:53.202 "name": "NewBaseBdev", 00:09:53.202 "uuid": "6f4bbe62-6129-416f-a40d-60cbf0183de7", 00:09:53.202 "is_configured": true, 00:09:53.202 "data_offset": 2048, 00:09:53.202 "data_size": 63488 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "name": "BaseBdev2", 00:09:53.202 "uuid": "cea83ba4-37f9-4a7c-836a-aef038439cb7", 00:09:53.202 "is_configured": true, 00:09:53.202 "data_offset": 2048, 00:09:53.202 "data_size": 63488 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 
"name": "BaseBdev3", 00:09:53.202 "uuid": "227e7804-9d97-4567-9943-6a85dc6ad9aa", 00:09:53.202 "is_configured": true, 00:09:53.202 "data_offset": 2048, 00:09:53.202 "data_size": 63488 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "name": "BaseBdev4", 00:09:53.202 "uuid": "ced2f79d-f52f-4bc4-99f7-1e8f01053e44", 00:09:53.202 "is_configured": true, 00:09:53.202 "data_offset": 2048, 00:09:53.202 "data_size": 63488 00:09:53.202 } 00:09:53.202 ] 00:09:53.202 } 00:09:53.202 } 00:09:53.202 }' 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:53.202 BaseBdev2 00:09:53.202 BaseBdev3 00:09:53.202 BaseBdev4' 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.202 02:25:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.202 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.203 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.462 [2024-11-28 02:25:26.951890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.462 [2024-11-28 02:25:26.951947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.462 [2024-11-28 02:25:26.952034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.462 [2024-11-28 02:25:26.952112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.462 [2024-11-28 02:25:26.952123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69843 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69843 ']' 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69843 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69843 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.462 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69843' 00:09:53.463 killing process with pid 69843 00:09:53.463 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69843 00:09:53.463 [2024-11-28 02:25:27.000342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.463 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69843 00:09:54.032 [2024-11-28 02:25:27.428047] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.411 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:55.411 00:09:55.411 real 0m11.680s 00:09:55.411 user 0m18.234s 00:09:55.411 sys 0m2.220s 00:09:55.411 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.411 02:25:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.411 ************************************ 00:09:55.411 END TEST raid_state_function_test_sb 00:09:55.411 ************************************ 00:09:55.411 02:25:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:55.411 02:25:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:55.411 02:25:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.411 02:25:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.411 ************************************ 00:09:55.411 START TEST raid_superblock_test 00:09:55.411 ************************************ 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:55.411 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70513 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70513 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70513 ']' 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.412 02:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.412 [2024-11-28 02:25:28.814628] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:55.412 [2024-11-28 02:25:28.814764] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70513 ] 00:09:55.412 [2024-11-28 02:25:28.968013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.671 [2024-11-28 02:25:29.103004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.671 [2024-11-28 02:25:29.333991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.671 [2024-11-28 02:25:29.334054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.241 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.241 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:56.241 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:56.241 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:56.241 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:56.241 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:56.241 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:56.242 
02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.242 malloc1 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.242 [2024-11-28 02:25:29.699706] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:56.242 [2024-11-28 02:25:29.699775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.242 [2024-11-28 02:25:29.699798] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:56.242 [2024-11-28 02:25:29.699807] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.242 [2024-11-28 02:25:29.702235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.242 [2024-11-28 02:25:29.702266] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:56.242 pt1 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.242 malloc2 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.242 [2024-11-28 02:25:29.760097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.242 [2024-11-28 02:25:29.760154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.242 [2024-11-28 02:25:29.760184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:56.242 [2024-11-28 02:25:29.760194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.242 [2024-11-28 02:25:29.762615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.242 [2024-11-28 02:25:29.762646] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.242 
pt2 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.242 malloc3 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.242 [2024-11-28 02:25:29.837982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:56.242 [2024-11-28 02:25:29.838039] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.242 [2024-11-28 02:25:29.838062] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:56.242 [2024-11-28 02:25:29.838072] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.242 [2024-11-28 02:25:29.840474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.242 [2024-11-28 02:25:29.840508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:56.242 pt3 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.242 malloc4 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.242 [2024-11-28 02:25:29.900086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:56.242 [2024-11-28 02:25:29.900153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.242 [2024-11-28 02:25:29.900176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:56.242 [2024-11-28 02:25:29.900186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.242 [2024-11-28 02:25:29.902570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.242 [2024-11-28 02:25:29.902602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:56.242 pt4 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.242 [2024-11-28 02:25:29.912109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:56.242 [2024-11-28 
02:25:29.914193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.242 [2024-11-28 02:25:29.914283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:56.242 [2024-11-28 02:25:29.914334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:56.242 [2024-11-28 02:25:29.914539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:56.242 [2024-11-28 02:25:29.914559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:56.242 [2024-11-28 02:25:29.914850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:56.242 [2024-11-28 02:25:29.915082] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:56.242 [2024-11-28 02:25:29.915102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:56.242 [2024-11-28 02:25:29.915260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.242 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.503 "name": "raid_bdev1", 00:09:56.503 "uuid": "fdbd9d82-f686-413f-9f60-63ac87cba7de", 00:09:56.503 "strip_size_kb": 64, 00:09:56.503 "state": "online", 00:09:56.503 "raid_level": "raid0", 00:09:56.503 "superblock": true, 00:09:56.503 "num_base_bdevs": 4, 00:09:56.503 "num_base_bdevs_discovered": 4, 00:09:56.503 "num_base_bdevs_operational": 4, 00:09:56.503 "base_bdevs_list": [ 00:09:56.503 { 00:09:56.503 "name": "pt1", 00:09:56.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.503 "is_configured": true, 00:09:56.503 "data_offset": 2048, 00:09:56.503 "data_size": 63488 00:09:56.503 }, 00:09:56.503 { 00:09:56.503 "name": "pt2", 00:09:56.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.503 "is_configured": true, 00:09:56.503 "data_offset": 2048, 00:09:56.503 "data_size": 63488 00:09:56.503 }, 00:09:56.503 { 00:09:56.503 "name": "pt3", 00:09:56.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.503 "is_configured": true, 00:09:56.503 "data_offset": 2048, 00:09:56.503 
"data_size": 63488 00:09:56.503 }, 00:09:56.503 { 00:09:56.503 "name": "pt4", 00:09:56.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:56.503 "is_configured": true, 00:09:56.503 "data_offset": 2048, 00:09:56.503 "data_size": 63488 00:09:56.503 } 00:09:56.503 ] 00:09:56.503 }' 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.503 02:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.763 [2024-11-28 02:25:30.343897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.763 "name": "raid_bdev1", 00:09:56.763 "aliases": [ 00:09:56.763 "fdbd9d82-f686-413f-9f60-63ac87cba7de" 
00:09:56.763 ], 00:09:56.763 "product_name": "Raid Volume", 00:09:56.763 "block_size": 512, 00:09:56.763 "num_blocks": 253952, 00:09:56.763 "uuid": "fdbd9d82-f686-413f-9f60-63ac87cba7de", 00:09:56.763 "assigned_rate_limits": { 00:09:56.763 "rw_ios_per_sec": 0, 00:09:56.763 "rw_mbytes_per_sec": 0, 00:09:56.763 "r_mbytes_per_sec": 0, 00:09:56.763 "w_mbytes_per_sec": 0 00:09:56.763 }, 00:09:56.763 "claimed": false, 00:09:56.763 "zoned": false, 00:09:56.763 "supported_io_types": { 00:09:56.763 "read": true, 00:09:56.763 "write": true, 00:09:56.763 "unmap": true, 00:09:56.763 "flush": true, 00:09:56.763 "reset": true, 00:09:56.763 "nvme_admin": false, 00:09:56.763 "nvme_io": false, 00:09:56.763 "nvme_io_md": false, 00:09:56.763 "write_zeroes": true, 00:09:56.763 "zcopy": false, 00:09:56.763 "get_zone_info": false, 00:09:56.763 "zone_management": false, 00:09:56.763 "zone_append": false, 00:09:56.763 "compare": false, 00:09:56.763 "compare_and_write": false, 00:09:56.763 "abort": false, 00:09:56.763 "seek_hole": false, 00:09:56.763 "seek_data": false, 00:09:56.763 "copy": false, 00:09:56.763 "nvme_iov_md": false 00:09:56.763 }, 00:09:56.763 "memory_domains": [ 00:09:56.763 { 00:09:56.763 "dma_device_id": "system", 00:09:56.763 "dma_device_type": 1 00:09:56.763 }, 00:09:56.763 { 00:09:56.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.763 "dma_device_type": 2 00:09:56.763 }, 00:09:56.763 { 00:09:56.763 "dma_device_id": "system", 00:09:56.763 "dma_device_type": 1 00:09:56.763 }, 00:09:56.763 { 00:09:56.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.763 "dma_device_type": 2 00:09:56.763 }, 00:09:56.763 { 00:09:56.763 "dma_device_id": "system", 00:09:56.763 "dma_device_type": 1 00:09:56.763 }, 00:09:56.763 { 00:09:56.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.763 "dma_device_type": 2 00:09:56.763 }, 00:09:56.763 { 00:09:56.763 "dma_device_id": "system", 00:09:56.763 "dma_device_type": 1 00:09:56.763 }, 00:09:56.763 { 00:09:56.763 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:56.763 "dma_device_type": 2 00:09:56.763 } 00:09:56.763 ], 00:09:56.763 "driver_specific": { 00:09:56.763 "raid": { 00:09:56.763 "uuid": "fdbd9d82-f686-413f-9f60-63ac87cba7de", 00:09:56.763 "strip_size_kb": 64, 00:09:56.763 "state": "online", 00:09:56.763 "raid_level": "raid0", 00:09:56.763 "superblock": true, 00:09:56.763 "num_base_bdevs": 4, 00:09:56.763 "num_base_bdevs_discovered": 4, 00:09:56.763 "num_base_bdevs_operational": 4, 00:09:56.763 "base_bdevs_list": [ 00:09:56.763 { 00:09:56.763 "name": "pt1", 00:09:56.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.763 "is_configured": true, 00:09:56.763 "data_offset": 2048, 00:09:56.763 "data_size": 63488 00:09:56.763 }, 00:09:56.763 { 00:09:56.763 "name": "pt2", 00:09:56.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.763 "is_configured": true, 00:09:56.763 "data_offset": 2048, 00:09:56.763 "data_size": 63488 00:09:56.763 }, 00:09:56.763 { 00:09:56.763 "name": "pt3", 00:09:56.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.763 "is_configured": true, 00:09:56.763 "data_offset": 2048, 00:09:56.763 "data_size": 63488 00:09:56.763 }, 00:09:56.763 { 00:09:56.763 "name": "pt4", 00:09:56.763 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:56.763 "is_configured": true, 00:09:56.763 "data_offset": 2048, 00:09:56.763 "data_size": 63488 00:09:56.763 } 00:09:56.763 ] 00:09:56.763 } 00:09:56.763 } 00:09:56.763 }' 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:56.763 pt2 00:09:56.763 pt3 00:09:56.763 pt4' 00:09:56.763 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.023 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.024 02:25:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:57.024 [2024-11-28 02:25:30.675206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.024 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fdbd9d82-f686-413f-9f60-63ac87cba7de 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fdbd9d82-f686-413f-9f60-63ac87cba7de ']' 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.284 [2024-11-28 02:25:30.706895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.284 [2024-11-28 02:25:30.706937] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.284 [2024-11-28 02:25:30.707044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.284 [2024-11-28 02:25:30.707128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.284 [2024-11-28 02:25:30.707148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.284 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.284 [2024-11-28 02:25:30.870632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:57.284 [2024-11-28 02:25:30.872784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:57.284 [2024-11-28 02:25:30.872836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:57.284 [2024-11-28 02:25:30.872869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:57.284 [2024-11-28 02:25:30.872933] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:57.284 [2024-11-28 02:25:30.872979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:57.284 [2024-11-28 02:25:30.873001] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:57.285 [2024-11-28 02:25:30.873020] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:57.285 [2024-11-28 02:25:30.873033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.285 [2024-11-28 02:25:30.873047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:09:57.285 request: 00:09:57.285 { 00:09:57.285 "name": "raid_bdev1", 00:09:57.285 "raid_level": "raid0", 00:09:57.285 "base_bdevs": [ 00:09:57.285 "malloc1", 00:09:57.285 "malloc2", 00:09:57.285 "malloc3", 00:09:57.285 "malloc4" 00:09:57.285 ], 00:09:57.285 "strip_size_kb": 64, 00:09:57.285 "superblock": false, 00:09:57.285 "method": "bdev_raid_create", 00:09:57.285 "req_id": 1 00:09:57.285 } 00:09:57.285 Got JSON-RPC error response 00:09:57.285 response: 00:09:57.285 { 00:09:57.285 "code": -17, 00:09:57.285 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:57.285 } 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.285 [2024-11-28 02:25:30.926536] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:57.285 [2024-11-28 02:25:30.926585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.285 [2024-11-28 02:25:30.926605] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:57.285 [2024-11-28 02:25:30.926620] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.285 [2024-11-28 02:25:30.929138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.285 [2024-11-28 02:25:30.929172] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:57.285 [2024-11-28 02:25:30.929243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:57.285 [2024-11-28 02:25:30.929302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:57.285 pt1 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.285 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.544 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.544 "name": "raid_bdev1", 00:09:57.544 "uuid": "fdbd9d82-f686-413f-9f60-63ac87cba7de", 00:09:57.544 "strip_size_kb": 64, 00:09:57.544 "state": "configuring", 00:09:57.544 "raid_level": "raid0", 00:09:57.544 "superblock": true, 00:09:57.545 "num_base_bdevs": 4, 00:09:57.545 "num_base_bdevs_discovered": 1, 00:09:57.545 "num_base_bdevs_operational": 4, 00:09:57.545 "base_bdevs_list": [ 00:09:57.545 { 00:09:57.545 "name": "pt1", 00:09:57.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.545 "is_configured": true, 00:09:57.545 "data_offset": 2048, 00:09:57.545 "data_size": 63488 00:09:57.545 }, 00:09:57.545 { 00:09:57.545 "name": null, 00:09:57.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.545 "is_configured": false, 00:09:57.545 "data_offset": 2048, 00:09:57.545 "data_size": 63488 00:09:57.545 }, 00:09:57.545 { 00:09:57.545 "name": null, 00:09:57.545 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:57.545 "is_configured": false, 00:09:57.545 "data_offset": 2048, 00:09:57.545 "data_size": 63488 00:09:57.545 }, 00:09:57.545 { 00:09:57.545 "name": null, 00:09:57.545 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:57.545 "is_configured": false, 00:09:57.545 "data_offset": 2048, 00:09:57.545 "data_size": 63488 00:09:57.545 } 00:09:57.545 ] 00:09:57.545 }' 00:09:57.545 02:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.545 02:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.804 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:57.804 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:57.804 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.804 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.804 [2024-11-28 02:25:31.389757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:57.804 [2024-11-28 02:25:31.389844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.804 [2024-11-28 02:25:31.389864] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:57.804 [2024-11-28 02:25:31.389875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.804 [2024-11-28 02:25:31.390368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.804 [2024-11-28 02:25:31.390389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:57.804 [2024-11-28 02:25:31.390473] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:57.804 [2024-11-28 02:25:31.390497] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:57.804 pt2 00:09:57.804 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.804 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:57.804 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.804 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.804 [2024-11-28 02:25:31.397764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:57.804 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.805 02:25:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.805 "name": "raid_bdev1", 00:09:57.805 "uuid": "fdbd9d82-f686-413f-9f60-63ac87cba7de", 00:09:57.805 "strip_size_kb": 64, 00:09:57.805 "state": "configuring", 00:09:57.805 "raid_level": "raid0", 00:09:57.805 "superblock": true, 00:09:57.805 "num_base_bdevs": 4, 00:09:57.805 "num_base_bdevs_discovered": 1, 00:09:57.805 "num_base_bdevs_operational": 4, 00:09:57.805 "base_bdevs_list": [ 00:09:57.805 { 00:09:57.805 "name": "pt1", 00:09:57.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.805 "is_configured": true, 00:09:57.805 "data_offset": 2048, 00:09:57.805 "data_size": 63488 00:09:57.805 }, 00:09:57.805 { 00:09:57.805 "name": null, 00:09:57.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.805 "is_configured": false, 00:09:57.805 "data_offset": 0, 00:09:57.805 "data_size": 63488 00:09:57.805 }, 00:09:57.805 { 00:09:57.805 "name": null, 00:09:57.805 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.805 "is_configured": false, 00:09:57.805 "data_offset": 2048, 00:09:57.805 "data_size": 63488 00:09:57.805 }, 00:09:57.805 { 00:09:57.805 "name": null, 00:09:57.805 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:57.805 "is_configured": false, 00:09:57.805 "data_offset": 2048, 00:09:57.805 "data_size": 63488 00:09:57.805 } 00:09:57.805 ] 00:09:57.805 }' 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.805 02:25:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 [2024-11-28 02:25:31.872997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:58.375 [2024-11-28 02:25:31.873070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.375 [2024-11-28 02:25:31.873092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:58.375 [2024-11-28 02:25:31.873102] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.375 [2024-11-28 02:25:31.873600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.375 [2024-11-28 02:25:31.873616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:58.375 [2024-11-28 02:25:31.873718] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:58.375 [2024-11-28 02:25:31.873737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:58.375 pt2 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 [2024-11-28 02:25:31.884926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:58.375 [2024-11-28 02:25:31.884988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.375 [2024-11-28 02:25:31.885008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:58.375 [2024-11-28 02:25:31.885015] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.375 [2024-11-28 02:25:31.885437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.375 [2024-11-28 02:25:31.885459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:58.375 [2024-11-28 02:25:31.885525] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:58.375 [2024-11-28 02:25:31.885550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:58.375 pt3 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 [2024-11-28 02:25:31.896866] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:58.375 [2024-11-28 02:25:31.896903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.375 [2024-11-28 02:25:31.896928] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:58.375 [2024-11-28 02:25:31.896936] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.375 [2024-11-28 02:25:31.897331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.375 [2024-11-28 02:25:31.897346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:58.375 [2024-11-28 02:25:31.897403] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:58.375 [2024-11-28 02:25:31.897423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:58.375 [2024-11-28 02:25:31.897549] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:58.375 [2024-11-28 02:25:31.897558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:58.375 [2024-11-28 02:25:31.897792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:58.375 [2024-11-28 02:25:31.897975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:58.375 [2024-11-28 02:25:31.898005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:58.375 [2024-11-28 02:25:31.898143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.375 pt4 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.375 "name": "raid_bdev1", 00:09:58.375 "uuid": "fdbd9d82-f686-413f-9f60-63ac87cba7de", 00:09:58.375 "strip_size_kb": 64, 00:09:58.375 "state": "online", 00:09:58.375 "raid_level": "raid0", 00:09:58.375 
"superblock": true, 00:09:58.375 "num_base_bdevs": 4, 00:09:58.375 "num_base_bdevs_discovered": 4, 00:09:58.375 "num_base_bdevs_operational": 4, 00:09:58.375 "base_bdevs_list": [ 00:09:58.375 { 00:09:58.375 "name": "pt1", 00:09:58.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:58.375 "is_configured": true, 00:09:58.375 "data_offset": 2048, 00:09:58.375 "data_size": 63488 00:09:58.375 }, 00:09:58.375 { 00:09:58.375 "name": "pt2", 00:09:58.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.375 "is_configured": true, 00:09:58.375 "data_offset": 2048, 00:09:58.375 "data_size": 63488 00:09:58.375 }, 00:09:58.375 { 00:09:58.375 "name": "pt3", 00:09:58.375 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.375 "is_configured": true, 00:09:58.375 "data_offset": 2048, 00:09:58.375 "data_size": 63488 00:09:58.375 }, 00:09:58.375 { 00:09:58.375 "name": "pt4", 00:09:58.375 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:58.375 "is_configured": true, 00:09:58.375 "data_offset": 2048, 00:09:58.375 "data_size": 63488 00:09:58.375 } 00:09:58.375 ] 00:09:58.375 }' 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.375 02:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.945 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:58.945 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.946 02:25:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.946 [2024-11-28 02:25:32.388486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:58.946 "name": "raid_bdev1", 00:09:58.946 "aliases": [ 00:09:58.946 "fdbd9d82-f686-413f-9f60-63ac87cba7de" 00:09:58.946 ], 00:09:58.946 "product_name": "Raid Volume", 00:09:58.946 "block_size": 512, 00:09:58.946 "num_blocks": 253952, 00:09:58.946 "uuid": "fdbd9d82-f686-413f-9f60-63ac87cba7de", 00:09:58.946 "assigned_rate_limits": { 00:09:58.946 "rw_ios_per_sec": 0, 00:09:58.946 "rw_mbytes_per_sec": 0, 00:09:58.946 "r_mbytes_per_sec": 0, 00:09:58.946 "w_mbytes_per_sec": 0 00:09:58.946 }, 00:09:58.946 "claimed": false, 00:09:58.946 "zoned": false, 00:09:58.946 "supported_io_types": { 00:09:58.946 "read": true, 00:09:58.946 "write": true, 00:09:58.946 "unmap": true, 00:09:58.946 "flush": true, 00:09:58.946 "reset": true, 00:09:58.946 "nvme_admin": false, 00:09:58.946 "nvme_io": false, 00:09:58.946 "nvme_io_md": false, 00:09:58.946 "write_zeroes": true, 00:09:58.946 "zcopy": false, 00:09:58.946 "get_zone_info": false, 00:09:58.946 "zone_management": false, 00:09:58.946 "zone_append": false, 00:09:58.946 "compare": false, 00:09:58.946 "compare_and_write": false, 00:09:58.946 "abort": false, 00:09:58.946 "seek_hole": false, 00:09:58.946 "seek_data": false, 00:09:58.946 "copy": false, 00:09:58.946 "nvme_iov_md": false 00:09:58.946 }, 00:09:58.946 
"memory_domains": [ 00:09:58.946 { 00:09:58.946 "dma_device_id": "system", 00:09:58.946 "dma_device_type": 1 00:09:58.946 }, 00:09:58.946 { 00:09:58.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.946 "dma_device_type": 2 00:09:58.946 }, 00:09:58.946 { 00:09:58.946 "dma_device_id": "system", 00:09:58.946 "dma_device_type": 1 00:09:58.946 }, 00:09:58.946 { 00:09:58.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.946 "dma_device_type": 2 00:09:58.946 }, 00:09:58.946 { 00:09:58.946 "dma_device_id": "system", 00:09:58.946 "dma_device_type": 1 00:09:58.946 }, 00:09:58.946 { 00:09:58.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.946 "dma_device_type": 2 00:09:58.946 }, 00:09:58.946 { 00:09:58.946 "dma_device_id": "system", 00:09:58.946 "dma_device_type": 1 00:09:58.946 }, 00:09:58.946 { 00:09:58.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.946 "dma_device_type": 2 00:09:58.946 } 00:09:58.946 ], 00:09:58.946 "driver_specific": { 00:09:58.946 "raid": { 00:09:58.946 "uuid": "fdbd9d82-f686-413f-9f60-63ac87cba7de", 00:09:58.946 "strip_size_kb": 64, 00:09:58.946 "state": "online", 00:09:58.946 "raid_level": "raid0", 00:09:58.946 "superblock": true, 00:09:58.946 "num_base_bdevs": 4, 00:09:58.946 "num_base_bdevs_discovered": 4, 00:09:58.946 "num_base_bdevs_operational": 4, 00:09:58.946 "base_bdevs_list": [ 00:09:58.946 { 00:09:58.946 "name": "pt1", 00:09:58.946 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:58.946 "is_configured": true, 00:09:58.946 "data_offset": 2048, 00:09:58.946 "data_size": 63488 00:09:58.946 }, 00:09:58.946 { 00:09:58.946 "name": "pt2", 00:09:58.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.946 "is_configured": true, 00:09:58.946 "data_offset": 2048, 00:09:58.946 "data_size": 63488 00:09:58.946 }, 00:09:58.946 { 00:09:58.946 "name": "pt3", 00:09:58.946 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.946 "is_configured": true, 00:09:58.946 "data_offset": 2048, 00:09:58.946 "data_size": 63488 
00:09:58.946 }, 00:09:58.946 { 00:09:58.946 "name": "pt4", 00:09:58.946 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:58.946 "is_configured": true, 00:09:58.946 "data_offset": 2048, 00:09:58.946 "data_size": 63488 00:09:58.946 } 00:09:58.946 ] 00:09:58.946 } 00:09:58.946 } 00:09:58.946 }' 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:58.946 pt2 00:09:58.946 pt3 00:09:58.946 pt4' 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.946 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.217 [2024-11-28 02:25:32.703799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fdbd9d82-f686-413f-9f60-63ac87cba7de '!=' fdbd9d82-f686-413f-9f60-63ac87cba7de ']' 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70513 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70513 ']' 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70513 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70513 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.217 killing process with pid 70513 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70513' 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70513 00:09:59.217 [2024-11-28 02:25:32.762807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.217 [2024-11-28 02:25:32.762946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.217 02:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70513 00:09:59.217 [2024-11-28 02:25:32.763034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.217 [2024-11-28 02:25:32.763045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:59.797 [2024-11-28 02:25:33.196240] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.736 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:00.736 00:10:00.736 real 0m5.679s 00:10:00.736 user 0m7.998s 00:10:00.736 sys 0m1.032s 00:10:00.736 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.736 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.736 ************************************ 00:10:00.736 END TEST raid_superblock_test 
00:10:00.736 ************************************ 00:10:00.996 02:25:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:00.996 02:25:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:00.996 02:25:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.996 02:25:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.996 ************************************ 00:10:00.996 START TEST raid_read_error_test 00:10:00.996 ************************************ 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:00.996 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ig8YYhvUnj 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70778 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70778 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70778 ']' 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.997 02:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.997 [2024-11-28 02:25:34.580412] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:00.997 [2024-11-28 02:25:34.580542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70778 ] 00:10:01.257 [2024-11-28 02:25:34.756296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.257 [2024-11-28 02:25:34.893997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.528 [2024-11-28 02:25:35.132831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.528 [2024-11-28 02:25:35.132908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.788 BaseBdev1_malloc 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.788 true 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.788 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.788 [2024-11-28 02:25:35.463572] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:01.788 [2024-11-28 02:25:35.463646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.788 [2024-11-28 02:25:35.463670] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:01.788 [2024-11-28 02:25:35.463682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.788 [2024-11-28 02:25:35.466204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.788 [2024-11-28 02:25:35.466240] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:02.048 BaseBdev1 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.048 BaseBdev2_malloc 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.048 true 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.048 [2024-11-28 02:25:35.537799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:02.048 [2024-11-28 02:25:35.537862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.048 [2024-11-28 02:25:35.537878] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:02.048 [2024-11-28 02:25:35.537890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.048 [2024-11-28 02:25:35.540294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.048 [2024-11-28 02:25:35.540330] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:02.048 BaseBdev2 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.048 BaseBdev3_malloc 00:10:02.048 02:25:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.048 true 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.048 [2024-11-28 02:25:35.622952] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:02.048 [2024-11-28 02:25:35.623018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.048 [2024-11-28 02:25:35.623041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:02.048 [2024-11-28 02:25:35.623053] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.048 [2024-11-28 02:25:35.625529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.048 [2024-11-28 02:25:35.625565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:02.048 BaseBdev3 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.048 BaseBdev4_malloc 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.048 true 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.048 [2024-11-28 02:25:35.695601] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:02.048 [2024-11-28 02:25:35.695660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.048 [2024-11-28 02:25:35.695678] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:02.048 [2024-11-28 02:25:35.695689] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.048 [2024-11-28 02:25:35.698083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.048 [2024-11-28 02:25:35.698149] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:02.048 BaseBdev4 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.048 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.048 [2024-11-28 02:25:35.707658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.049 [2024-11-28 02:25:35.709732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.049 [2024-11-28 02:25:35.709826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.049 [2024-11-28 02:25:35.709885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:02.049 [2024-11-28 02:25:35.710120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:02.049 [2024-11-28 02:25:35.710163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:02.049 [2024-11-28 02:25:35.710415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:02.049 [2024-11-28 02:25:35.710597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:02.049 [2024-11-28 02:25:35.710612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:02.049 [2024-11-28 02:25:35.710769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:02.049 02:25:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.049 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.309 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.309 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.309 "name": "raid_bdev1", 00:10:02.309 "uuid": "e594c63d-bbb8-4e98-9d11-b417e46e3dee", 00:10:02.309 "strip_size_kb": 64, 00:10:02.309 "state": "online", 00:10:02.309 "raid_level": "raid0", 00:10:02.309 "superblock": true, 00:10:02.309 "num_base_bdevs": 4, 00:10:02.309 "num_base_bdevs_discovered": 4, 00:10:02.309 "num_base_bdevs_operational": 4, 00:10:02.309 "base_bdevs_list": [ 00:10:02.309 
{ 00:10:02.309 "name": "BaseBdev1", 00:10:02.309 "uuid": "2bc95a73-d7ed-58d2-bf2c-ec756154d6e3", 00:10:02.309 "is_configured": true, 00:10:02.309 "data_offset": 2048, 00:10:02.309 "data_size": 63488 00:10:02.309 }, 00:10:02.309 { 00:10:02.309 "name": "BaseBdev2", 00:10:02.309 "uuid": "f1635ad7-9c08-59dd-91c6-c4ec4baad374", 00:10:02.309 "is_configured": true, 00:10:02.309 "data_offset": 2048, 00:10:02.309 "data_size": 63488 00:10:02.309 }, 00:10:02.309 { 00:10:02.309 "name": "BaseBdev3", 00:10:02.309 "uuid": "d1254fc6-b439-5d42-aba7-e2e8ca25944a", 00:10:02.309 "is_configured": true, 00:10:02.309 "data_offset": 2048, 00:10:02.309 "data_size": 63488 00:10:02.309 }, 00:10:02.309 { 00:10:02.309 "name": "BaseBdev4", 00:10:02.309 "uuid": "909219e0-557b-58bb-ac9b-623a13b4a38f", 00:10:02.309 "is_configured": true, 00:10:02.309 "data_offset": 2048, 00:10:02.309 "data_size": 63488 00:10:02.309 } 00:10:02.309 ] 00:10:02.309 }' 00:10:02.309 02:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.309 02:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.568 02:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:02.568 02:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:02.568 [2024-11-28 02:25:36.220233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.507 02:25:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.507 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.767 02:25:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.767 "name": "raid_bdev1", 00:10:03.767 "uuid": "e594c63d-bbb8-4e98-9d11-b417e46e3dee", 00:10:03.767 "strip_size_kb": 64, 00:10:03.767 "state": "online", 00:10:03.767 "raid_level": "raid0", 00:10:03.767 "superblock": true, 00:10:03.767 "num_base_bdevs": 4, 00:10:03.767 "num_base_bdevs_discovered": 4, 00:10:03.767 "num_base_bdevs_operational": 4, 00:10:03.767 "base_bdevs_list": [ 00:10:03.768 { 00:10:03.768 "name": "BaseBdev1", 00:10:03.768 "uuid": "2bc95a73-d7ed-58d2-bf2c-ec756154d6e3", 00:10:03.768 "is_configured": true, 00:10:03.768 "data_offset": 2048, 00:10:03.768 "data_size": 63488 00:10:03.768 }, 00:10:03.768 { 00:10:03.768 "name": "BaseBdev2", 00:10:03.768 "uuid": "f1635ad7-9c08-59dd-91c6-c4ec4baad374", 00:10:03.768 "is_configured": true, 00:10:03.768 "data_offset": 2048, 00:10:03.768 "data_size": 63488 00:10:03.768 }, 00:10:03.768 { 00:10:03.768 "name": "BaseBdev3", 00:10:03.768 "uuid": "d1254fc6-b439-5d42-aba7-e2e8ca25944a", 00:10:03.768 "is_configured": true, 00:10:03.768 "data_offset": 2048, 00:10:03.768 "data_size": 63488 00:10:03.768 }, 00:10:03.768 { 00:10:03.768 "name": "BaseBdev4", 00:10:03.768 "uuid": "909219e0-557b-58bb-ac9b-623a13b4a38f", 00:10:03.768 "is_configured": true, 00:10:03.768 "data_offset": 2048, 00:10:03.768 "data_size": 63488 00:10:03.768 } 00:10:03.768 ] 00:10:03.768 }' 00:10:03.768 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.768 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.028 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.029 [2024-11-28 02:25:37.573405] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.029 [2024-11-28 02:25:37.573455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.029 [2024-11-28 02:25:37.576156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.029 [2024-11-28 02:25:37.576224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.029 [2024-11-28 02:25:37.576273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.029 [2024-11-28 02:25:37.576285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:04.029 { 00:10:04.029 "results": [ 00:10:04.029 { 00:10:04.029 "job": "raid_bdev1", 00:10:04.029 "core_mask": "0x1", 00:10:04.029 "workload": "randrw", 00:10:04.029 "percentage": 50, 00:10:04.029 "status": "finished", 00:10:04.029 "queue_depth": 1, 00:10:04.029 "io_size": 131072, 00:10:04.029 "runtime": 1.353765, 00:10:04.029 "iops": 13425.520677517885, 00:10:04.029 "mibps": 1678.1900846897356, 00:10:04.029 "io_failed": 1, 00:10:04.029 "io_timeout": 0, 00:10:04.029 "avg_latency_us": 104.76700289070668, 00:10:04.029 "min_latency_us": 24.817467248908297, 00:10:04.029 "max_latency_us": 1395.1441048034935 00:10:04.029 } 00:10:04.029 ], 00:10:04.029 "core_count": 1 00:10:04.029 } 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70778 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70778 ']' 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70778 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:04.029 02:25:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70778 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.029 killing process with pid 70778 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70778' 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70778 00:10:04.029 [2024-11-28 02:25:37.623987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.029 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70778 00:10:04.597 [2024-11-28 02:25:37.974932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:05.979 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ig8YYhvUnj 00:10:05.979 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:05.979 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:05.979 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:05.979 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:05.979 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.979 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:05.980 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:05.980 00:10:05.980 real 0m4.801s 00:10:05.980 user 0m5.489s 00:10:05.980 sys 0m0.677s 00:10:05.980 02:25:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.980 02:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.980 ************************************ 00:10:05.980 END TEST raid_read_error_test 00:10:05.980 ************************************ 00:10:05.980 02:25:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:05.980 02:25:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:05.980 02:25:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.980 02:25:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:05.980 ************************************ 00:10:05.980 START TEST raid_write_error_test 00:10:05.980 ************************************ 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:05.980 02:25:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CssDHl1Dgi 
00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70928
00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70928
00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70928 ']'
00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:05.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:05.980 02:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.980 [2024-11-28 02:25:39.464885] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:10:05.980 [2024-11-28 02:25:39.465039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70928 ]
00:10:05.980 [2024-11-28 02:25:39.640815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:06.240 [2024-11-28 02:25:39.782855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.499 [2024-11-28 02:25:40.014740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:06.499 [2024-11-28 02:25:40.014791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.760 BaseBdev1_malloc
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.760 true
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.760 [2024-11-28 02:25:40.340185] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:10:06.760 [2024-11-28 02:25:40.340245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:06.760 [2024-11-28 02:25:40.340266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:10:06.760 [2024-11-28 02:25:40.340277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:06.760 [2024-11-28 02:25:40.342631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:06.760 [2024-11-28 02:25:40.342694] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:06.760 BaseBdev1
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.760 BaseBdev2_malloc
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.760 true
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.760 [2024-11-28 02:25:40.413106] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:06.760 [2024-11-28 02:25:40.413175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:06.760 [2024-11-28 02:25:40.413191] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:10:06.760 [2024-11-28 02:25:40.413202] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:06.760 [2024-11-28 02:25:40.415590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:06.760 [2024-11-28 02:25:40.415624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:06.760 BaseBdev2
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.760 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.019 BaseBdev3_malloc
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.019 true
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.019 [2024-11-28 02:25:40.499323] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:10:07.019 [2024-11-28 02:25:40.499379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:07.019 [2024-11-28 02:25:40.499397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:10:07.019 [2024-11-28 02:25:40.499408] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:07.019 [2024-11-28 02:25:40.501809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:07.019 [2024-11-28 02:25:40.501846] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:10:07.019 BaseBdev3
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.019 BaseBdev4_malloc
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.019 true
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.019 [2024-11-28 02:25:40.573073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:10:07.019 [2024-11-28 02:25:40.573134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:07.019 [2024-11-28 02:25:40.573152] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:07.019 [2024-11-28 02:25:40.573163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:07.019 [2024-11-28 02:25:40.575489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:07.019 [2024-11-28 02:25:40.575526] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:10:07.019 BaseBdev4
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.019 [2024-11-28 02:25:40.585133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:07.019 [2024-11-28 02:25:40.587230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:07.019 [2024-11-28 02:25:40.587309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:07.019 [2024-11-28 02:25:40.587369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:07.019 [2024-11-28 02:25:40.587607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:10:07.019 [2024-11-28 02:25:40.587630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:07.019 [2024-11-28 02:25:40.587879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0
00:10:07.019 [2024-11-28 02:25:40.588065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:10:07.019 [2024-11-28 02:25:40.588081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:10:07.019 [2024-11-28 02:25:40.588225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:07.019 "name": "raid_bdev1",
00:10:07.019 "uuid": "3c22abf4-9dfe-461f-a217-299ea1a535ac",
00:10:07.019 "strip_size_kb": 64,
00:10:07.019 "state": "online",
00:10:07.019 "raid_level": "raid0",
00:10:07.019 "superblock": true,
00:10:07.019 "num_base_bdevs": 4,
00:10:07.019 "num_base_bdevs_discovered": 4,
00:10:07.019 "num_base_bdevs_operational": 4,
00:10:07.019 "base_bdevs_list": [
00:10:07.019 {
00:10:07.019 "name": "BaseBdev1",
00:10:07.019 "uuid": "09222ff0-d411-5e37-8daf-8d900f9c0af9",
00:10:07.019 "is_configured": true,
00:10:07.019 "data_offset": 2048,
00:10:07.019 "data_size": 63488
00:10:07.019 },
00:10:07.019 {
00:10:07.019 "name": "BaseBdev2",
00:10:07.019 "uuid": "67ec252e-6391-50b5-94e4-cd3c88750379",
00:10:07.019 "is_configured": true,
00:10:07.019 "data_offset": 2048,
00:10:07.019 "data_size": 63488
00:10:07.019 },
00:10:07.019 {
00:10:07.019 "name": "BaseBdev3",
00:10:07.019 "uuid": "52bc7cbe-398d-5f51-b17e-7a61e68b7557",
00:10:07.019 "is_configured": true,
00:10:07.019 "data_offset": 2048,
00:10:07.019 "data_size": 63488
00:10:07.019 },
00:10:07.019 {
00:10:07.019 "name": "BaseBdev4",
00:10:07.019 "uuid": "1be9e518-08df-5a60-8352-c3ad128beea7",
00:10:07.019 "is_configured": true,
00:10:07.019 "data_offset": 2048,
00:10:07.019 "data_size": 63488
00:10:07.019 }
00:10:07.019 ]
00:10:07.019 }'
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:07.019 02:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.587 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:10:07.587 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:10:07.587 [2024-11-28 02:25:41.097614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:08.526 "name": "raid_bdev1",
00:10:08.526 "uuid": "3c22abf4-9dfe-461f-a217-299ea1a535ac",
00:10:08.526 "strip_size_kb": 64,
00:10:08.526 "state": "online",
00:10:08.526 "raid_level": "raid0",
00:10:08.526 "superblock": true,
00:10:08.526 "num_base_bdevs": 4,
00:10:08.526 "num_base_bdevs_discovered": 4,
00:10:08.526 "num_base_bdevs_operational": 4,
00:10:08.526 "base_bdevs_list": [
00:10:08.526 {
00:10:08.526 "name": "BaseBdev1",
00:10:08.526 "uuid": "09222ff0-d411-5e37-8daf-8d900f9c0af9",
00:10:08.526 "is_configured": true,
00:10:08.526 "data_offset": 2048,
00:10:08.526 "data_size": 63488
00:10:08.526 },
00:10:08.526 {
00:10:08.526 "name": "BaseBdev2",
00:10:08.526 "uuid": "67ec252e-6391-50b5-94e4-cd3c88750379",
00:10:08.526 "is_configured": true,
00:10:08.526 "data_offset": 2048,
00:10:08.526 "data_size": 63488
00:10:08.526 },
00:10:08.526 {
00:10:08.526 "name": "BaseBdev3",
00:10:08.526 "uuid": "52bc7cbe-398d-5f51-b17e-7a61e68b7557",
00:10:08.526 "is_configured": true,
00:10:08.526 "data_offset": 2048,
00:10:08.526 "data_size": 63488
00:10:08.526 },
00:10:08.526 {
00:10:08.526 "name": "BaseBdev4",
00:10:08.526 "uuid": "1be9e518-08df-5a60-8352-c3ad128beea7",
00:10:08.526 "is_configured": true,
00:10:08.526 "data_offset": 2048,
00:10:08.526 "data_size": 63488
00:10:08.526 }
00:10:08.526 ]
00:10:08.526 }'
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:08.526 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.803 [2024-11-28 02:25:42.398558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:08.803 [2024-11-28 02:25:42.398609] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:08.803 [2024-11-28 02:25:42.401335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:08.803 [2024-11-28 02:25:42.401424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:08.803 [2024-11-28 02:25:42.401475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:08.803 [2024-11-28 02:25:42.401489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:10:08.803 {
00:10:08.803 "results": [
00:10:08.803 {
00:10:08.803 "job": "raid_bdev1",
00:10:08.803 "core_mask": "0x1",
00:10:08.803 "workload": "randrw",
00:10:08.803 "percentage": 50,
00:10:08.803 "status": "finished",
00:10:08.803 "queue_depth": 1,
00:10:08.803 "io_size": 131072,
00:10:08.803 "runtime": 1.301421,
00:10:08.803 "iops": 13349.254391930051,
00:10:08.803 "mibps": 1668.6567989912564,
00:10:08.803 "io_failed": 1,
00:10:08.803 "io_timeout": 0,
00:10:08.803 "avg_latency_us": 105.41985590072602,
00:10:08.803 "min_latency_us": 26.382532751091702,
00:10:08.803 "max_latency_us": 1345.0620087336245
00:10:08.803 }
00:10:08.803 ],
00:10:08.803 "core_count": 1
00:10:08.803 }
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70928
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70928 ']'
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70928
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70928
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70928'
killing process with pid 70928
02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70928
[2024-11-28 02:25:42.446254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:08.803 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70928
00:10:09.401 [2024-11-28 02:25:42.812359] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:10.783 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CssDHl1Dgi
00:10:10.783 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:10.783 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:10.783 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77
00:10:10.783 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:10:10.783 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:10.783 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:10.783 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]]
00:10:10.783 
00:10:10.783 real 0m4.696s
00:10:10.783 user 0m5.301s
00:10:10.783 sys 0m0.711s
00:10:10.783 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:10.783 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.783 ************************************
00:10:10.783 END TEST raid_write_error_test
00:10:10.783 ************************************
00:10:10.783 02:25:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:10:10.783 02:25:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false
00:10:10.783 02:25:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:10.783 02:25:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:10.783 02:25:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:10.783 ************************************
00:10:10.783 START TEST raid_state_function_test
00:10:10.783 ************************************
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71067
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:10.783 Process raid pid: 71067
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71067'
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71067
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71067 ']'
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:10.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:10.783 02:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.783 [2024-11-28 02:25:44.217076] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:10:10.783 [2024-11-28 02:25:44.217191] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:10.783 [2024-11-28 02:25:44.395373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:11.043 [2024-11-28 02:25:44.505474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:11.043 [2024-11-28 02:25:44.706441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:11.043 [2024-11-28 02:25:44.706497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.619 [2024-11-28 02:25:45.040106] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:11.619 [2024-11-28 02:25:45.040167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:11.619 [2024-11-28 02:25:45.040178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:11.619 [2024-11-28 02:25:45.040190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:11.619 [2024-11-28 02:25:45.040198] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:11.619 [2024-11-28 02:25:45.040209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:11.619 [2024-11-28 02:25:45.040217] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:11.619 [2024-11-28 02:25:45.040227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.619 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.620 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.620 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:11.620 "name": "Existed_Raid",
00:10:11.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.620 "strip_size_kb": 64,
00:10:11.620 "state": "configuring",
00:10:11.620 "raid_level": "concat",
00:10:11.620 "superblock": false,
00:10:11.620 "num_base_bdevs": 4,
00:10:11.620 "num_base_bdevs_discovered": 0,
00:10:11.620 "num_base_bdevs_operational": 4,
00:10:11.620 "base_bdevs_list": [
00:10:11.620 {
00:10:11.620 "name": "BaseBdev1",
00:10:11.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.620 "is_configured": false,
00:10:11.620 "data_offset": 0,
00:10:11.620 "data_size": 0
00:10:11.620 },
00:10:11.620 {
00:10:11.620 "name": "BaseBdev2",
00:10:11.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.620 "is_configured": false,
00:10:11.620 "data_offset": 0,
00:10:11.620 "data_size": 0
00:10:11.620 },
00:10:11.620 {
00:10:11.620 "name": "BaseBdev3",
00:10:11.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.620 "is_configured": false,
00:10:11.620 "data_offset": 0,
00:10:11.620 "data_size": 0
00:10:11.620 },
00:10:11.620 {
00:10:11.620 "name": "BaseBdev4",
00:10:11.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.620 "is_configured": false,
00:10:11.620 "data_offset": 0,
00:10:11.620 "data_size": 0
00:10:11.620 }
00:10:11.620 ]
00:10:11.620 }'
00:10:11.620 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:11.620 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.883 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:11.884 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.884 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.884 [2024-11-28 02:25:45.507274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:11.884 [2024-11-28 02:25:45.507320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:11.884 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.884 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:11.884 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.884 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.884 [2024-11-28 02:25:45.519249] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:11.884 [2024-11-28 02:25:45.519297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:11.884 [2024-11-28 02:25:45.519308] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:11.884 [2024-11-28 02:25:45.519321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:11.884 [2024-11-28 02:25:45.519329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:11.884 [2024-11-28 02:25:45.519341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:11.884 [2024-11-28 02:25:45.519349] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:11.884 [2024-11-28 02:25:45.519361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:11.884 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.884 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:11.884 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.884 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.143 [2024-11-28 02:25:45.566309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:12.143 BaseBdev1
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.143 [ 00:10:12.143 { 00:10:12.143 "name": "BaseBdev1", 00:10:12.143 "aliases": [ 00:10:12.143 "cc9e0af0-5689-4104-9f9e-bea499fa9ee5" 00:10:12.143 ], 00:10:12.143 "product_name": "Malloc disk", 00:10:12.143 "block_size": 512, 00:10:12.143 "num_blocks": 65536, 00:10:12.143 "uuid": "cc9e0af0-5689-4104-9f9e-bea499fa9ee5", 00:10:12.143 "assigned_rate_limits": { 00:10:12.143 "rw_ios_per_sec": 0, 00:10:12.143 "rw_mbytes_per_sec": 0, 00:10:12.143 "r_mbytes_per_sec": 0, 00:10:12.143 "w_mbytes_per_sec": 0 00:10:12.143 }, 00:10:12.143 "claimed": true, 00:10:12.143 "claim_type": "exclusive_write", 00:10:12.143 "zoned": false, 00:10:12.143 "supported_io_types": { 00:10:12.143 "read": true, 00:10:12.143 "write": true, 00:10:12.143 "unmap": true, 00:10:12.143 "flush": true, 00:10:12.143 "reset": true, 00:10:12.143 "nvme_admin": false, 00:10:12.143 "nvme_io": false, 00:10:12.143 "nvme_io_md": false, 00:10:12.143 "write_zeroes": true, 00:10:12.143 "zcopy": true, 00:10:12.143 "get_zone_info": false, 00:10:12.143 "zone_management": false, 00:10:12.143 "zone_append": false, 00:10:12.143 "compare": false, 00:10:12.143 "compare_and_write": false, 00:10:12.143 "abort": true, 00:10:12.143 "seek_hole": false, 00:10:12.143 "seek_data": false, 00:10:12.143 "copy": true, 00:10:12.143 "nvme_iov_md": false 00:10:12.143 }, 00:10:12.143 "memory_domains": [ 00:10:12.143 { 00:10:12.143 "dma_device_id": "system", 00:10:12.143 "dma_device_type": 1 00:10:12.143 }, 00:10:12.143 { 00:10:12.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.143 "dma_device_type": 2 00:10:12.143 } 00:10:12.143 ], 00:10:12.143 "driver_specific": {} 00:10:12.143 } 00:10:12.143 ] 00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.143 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.144 "name": "Existed_Raid", 
00:10:12.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.144 "strip_size_kb": 64, 00:10:12.144 "state": "configuring", 00:10:12.144 "raid_level": "concat", 00:10:12.144 "superblock": false, 00:10:12.144 "num_base_bdevs": 4, 00:10:12.144 "num_base_bdevs_discovered": 1, 00:10:12.144 "num_base_bdevs_operational": 4, 00:10:12.144 "base_bdevs_list": [ 00:10:12.144 { 00:10:12.144 "name": "BaseBdev1", 00:10:12.144 "uuid": "cc9e0af0-5689-4104-9f9e-bea499fa9ee5", 00:10:12.144 "is_configured": true, 00:10:12.144 "data_offset": 0, 00:10:12.144 "data_size": 65536 00:10:12.144 }, 00:10:12.144 { 00:10:12.144 "name": "BaseBdev2", 00:10:12.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.144 "is_configured": false, 00:10:12.144 "data_offset": 0, 00:10:12.144 "data_size": 0 00:10:12.144 }, 00:10:12.144 { 00:10:12.144 "name": "BaseBdev3", 00:10:12.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.144 "is_configured": false, 00:10:12.144 "data_offset": 0, 00:10:12.144 "data_size": 0 00:10:12.144 }, 00:10:12.144 { 00:10:12.144 "name": "BaseBdev4", 00:10:12.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.144 "is_configured": false, 00:10:12.144 "data_offset": 0, 00:10:12.144 "data_size": 0 00:10:12.144 } 00:10:12.144 ] 00:10:12.144 }' 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.144 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.402 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.402 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.402 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.402 [2024-11-28 02:25:46.001708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.402 [2024-11-28 02:25:46.001774] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.402 [2024-11-28 02:25:46.009764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.402 [2024-11-28 02:25:46.011618] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.402 [2024-11-28 02:25:46.011670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.402 [2024-11-28 02:25:46.011681] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.402 [2024-11-28 02:25:46.011695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.402 [2024-11-28 02:25:46.011704] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:12.402 [2024-11-28 02:25:46.011714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.402 "name": "Existed_Raid", 00:10:12.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.402 "strip_size_kb": 64, 00:10:12.402 "state": "configuring", 00:10:12.402 "raid_level": "concat", 00:10:12.402 "superblock": false, 00:10:12.402 "num_base_bdevs": 4, 00:10:12.402 
"num_base_bdevs_discovered": 1, 00:10:12.402 "num_base_bdevs_operational": 4, 00:10:12.402 "base_bdevs_list": [ 00:10:12.402 { 00:10:12.402 "name": "BaseBdev1", 00:10:12.402 "uuid": "cc9e0af0-5689-4104-9f9e-bea499fa9ee5", 00:10:12.402 "is_configured": true, 00:10:12.402 "data_offset": 0, 00:10:12.402 "data_size": 65536 00:10:12.402 }, 00:10:12.402 { 00:10:12.402 "name": "BaseBdev2", 00:10:12.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.402 "is_configured": false, 00:10:12.402 "data_offset": 0, 00:10:12.402 "data_size": 0 00:10:12.402 }, 00:10:12.402 { 00:10:12.402 "name": "BaseBdev3", 00:10:12.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.402 "is_configured": false, 00:10:12.402 "data_offset": 0, 00:10:12.402 "data_size": 0 00:10:12.402 }, 00:10:12.402 { 00:10:12.402 "name": "BaseBdev4", 00:10:12.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.402 "is_configured": false, 00:10:12.402 "data_offset": 0, 00:10:12.402 "data_size": 0 00:10:12.402 } 00:10:12.402 ] 00:10:12.402 }' 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.402 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.969 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.969 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.969 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.970 [2024-11-28 02:25:46.450005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.970 BaseBdev2 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:12.970 02:25:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.970 [ 00:10:12.970 { 00:10:12.970 "name": "BaseBdev2", 00:10:12.970 "aliases": [ 00:10:12.970 "83ffeee3-b346-4cac-b9ba-c5325be886fd" 00:10:12.970 ], 00:10:12.970 "product_name": "Malloc disk", 00:10:12.970 "block_size": 512, 00:10:12.970 "num_blocks": 65536, 00:10:12.970 "uuid": "83ffeee3-b346-4cac-b9ba-c5325be886fd", 00:10:12.970 "assigned_rate_limits": { 00:10:12.970 "rw_ios_per_sec": 0, 00:10:12.970 "rw_mbytes_per_sec": 0, 00:10:12.970 "r_mbytes_per_sec": 0, 00:10:12.970 "w_mbytes_per_sec": 0 00:10:12.970 }, 00:10:12.970 "claimed": true, 00:10:12.970 "claim_type": "exclusive_write", 00:10:12.970 "zoned": false, 00:10:12.970 "supported_io_types": { 
00:10:12.970 "read": true, 00:10:12.970 "write": true, 00:10:12.970 "unmap": true, 00:10:12.970 "flush": true, 00:10:12.970 "reset": true, 00:10:12.970 "nvme_admin": false, 00:10:12.970 "nvme_io": false, 00:10:12.970 "nvme_io_md": false, 00:10:12.970 "write_zeroes": true, 00:10:12.970 "zcopy": true, 00:10:12.970 "get_zone_info": false, 00:10:12.970 "zone_management": false, 00:10:12.970 "zone_append": false, 00:10:12.970 "compare": false, 00:10:12.970 "compare_and_write": false, 00:10:12.970 "abort": true, 00:10:12.970 "seek_hole": false, 00:10:12.970 "seek_data": false, 00:10:12.970 "copy": true, 00:10:12.970 "nvme_iov_md": false 00:10:12.970 }, 00:10:12.970 "memory_domains": [ 00:10:12.970 { 00:10:12.970 "dma_device_id": "system", 00:10:12.970 "dma_device_type": 1 00:10:12.970 }, 00:10:12.970 { 00:10:12.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.970 "dma_device_type": 2 00:10:12.970 } 00:10:12.970 ], 00:10:12.970 "driver_specific": {} 00:10:12.970 } 00:10:12.970 ] 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.970 "name": "Existed_Raid", 00:10:12.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.970 "strip_size_kb": 64, 00:10:12.970 "state": "configuring", 00:10:12.970 "raid_level": "concat", 00:10:12.970 "superblock": false, 00:10:12.970 "num_base_bdevs": 4, 00:10:12.970 "num_base_bdevs_discovered": 2, 00:10:12.970 "num_base_bdevs_operational": 4, 00:10:12.970 "base_bdevs_list": [ 00:10:12.970 { 00:10:12.970 "name": "BaseBdev1", 00:10:12.970 "uuid": "cc9e0af0-5689-4104-9f9e-bea499fa9ee5", 00:10:12.970 "is_configured": true, 00:10:12.970 "data_offset": 0, 00:10:12.970 "data_size": 65536 00:10:12.970 }, 00:10:12.970 { 00:10:12.970 "name": "BaseBdev2", 00:10:12.970 "uuid": "83ffeee3-b346-4cac-b9ba-c5325be886fd", 00:10:12.970 
"is_configured": true, 00:10:12.970 "data_offset": 0, 00:10:12.970 "data_size": 65536 00:10:12.970 }, 00:10:12.970 { 00:10:12.970 "name": "BaseBdev3", 00:10:12.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.970 "is_configured": false, 00:10:12.970 "data_offset": 0, 00:10:12.970 "data_size": 0 00:10:12.970 }, 00:10:12.970 { 00:10:12.970 "name": "BaseBdev4", 00:10:12.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.970 "is_configured": false, 00:10:12.970 "data_offset": 0, 00:10:12.970 "data_size": 0 00:10:12.970 } 00:10:12.970 ] 00:10:12.970 }' 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.970 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.538 [2024-11-28 02:25:46.969976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.538 BaseBdev3 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.538 [ 00:10:13.538 { 00:10:13.538 "name": "BaseBdev3", 00:10:13.538 "aliases": [ 00:10:13.538 "76b7cbb3-fd6d-4943-b7f7-1659318650b2" 00:10:13.538 ], 00:10:13.538 "product_name": "Malloc disk", 00:10:13.538 "block_size": 512, 00:10:13.538 "num_blocks": 65536, 00:10:13.538 "uuid": "76b7cbb3-fd6d-4943-b7f7-1659318650b2", 00:10:13.538 "assigned_rate_limits": { 00:10:13.538 "rw_ios_per_sec": 0, 00:10:13.538 "rw_mbytes_per_sec": 0, 00:10:13.538 "r_mbytes_per_sec": 0, 00:10:13.538 "w_mbytes_per_sec": 0 00:10:13.538 }, 00:10:13.538 "claimed": true, 00:10:13.538 "claim_type": "exclusive_write", 00:10:13.538 "zoned": false, 00:10:13.538 "supported_io_types": { 00:10:13.538 "read": true, 00:10:13.538 "write": true, 00:10:13.538 "unmap": true, 00:10:13.538 "flush": true, 00:10:13.538 "reset": true, 00:10:13.538 "nvme_admin": false, 00:10:13.538 "nvme_io": false, 00:10:13.538 "nvme_io_md": false, 00:10:13.538 "write_zeroes": true, 00:10:13.538 "zcopy": true, 00:10:13.538 "get_zone_info": false, 00:10:13.538 "zone_management": false, 00:10:13.538 "zone_append": false, 00:10:13.538 "compare": false, 00:10:13.538 "compare_and_write": false, 
00:10:13.538 "abort": true, 00:10:13.538 "seek_hole": false, 00:10:13.538 "seek_data": false, 00:10:13.538 "copy": true, 00:10:13.538 "nvme_iov_md": false 00:10:13.538 }, 00:10:13.538 "memory_domains": [ 00:10:13.538 { 00:10:13.538 "dma_device_id": "system", 00:10:13.538 "dma_device_type": 1 00:10:13.538 }, 00:10:13.538 { 00:10:13.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.538 "dma_device_type": 2 00:10:13.538 } 00:10:13.538 ], 00:10:13.538 "driver_specific": {} 00:10:13.538 } 00:10:13.538 ] 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.538 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.538 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.538 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.538 "name": "Existed_Raid", 00:10:13.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.538 "strip_size_kb": 64, 00:10:13.538 "state": "configuring", 00:10:13.538 "raid_level": "concat", 00:10:13.538 "superblock": false, 00:10:13.538 "num_base_bdevs": 4, 00:10:13.538 "num_base_bdevs_discovered": 3, 00:10:13.538 "num_base_bdevs_operational": 4, 00:10:13.538 "base_bdevs_list": [ 00:10:13.538 { 00:10:13.538 "name": "BaseBdev1", 00:10:13.538 "uuid": "cc9e0af0-5689-4104-9f9e-bea499fa9ee5", 00:10:13.538 "is_configured": true, 00:10:13.538 "data_offset": 0, 00:10:13.538 "data_size": 65536 00:10:13.538 }, 00:10:13.538 { 00:10:13.538 "name": "BaseBdev2", 00:10:13.538 "uuid": "83ffeee3-b346-4cac-b9ba-c5325be886fd", 00:10:13.538 "is_configured": true, 00:10:13.538 "data_offset": 0, 00:10:13.538 "data_size": 65536 00:10:13.538 }, 00:10:13.538 { 00:10:13.538 "name": "BaseBdev3", 00:10:13.538 "uuid": "76b7cbb3-fd6d-4943-b7f7-1659318650b2", 00:10:13.538 "is_configured": true, 00:10:13.538 "data_offset": 0, 00:10:13.538 "data_size": 65536 00:10:13.538 }, 00:10:13.538 { 00:10:13.538 "name": "BaseBdev4", 00:10:13.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.538 "is_configured": false, 
00:10:13.538 "data_offset": 0, 00:10:13.538 "data_size": 0 00:10:13.538 } 00:10:13.538 ] 00:10:13.538 }' 00:10:13.538 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.539 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.798 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:13.798 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.798 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.799 [2024-11-28 02:25:47.452056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:13.799 [2024-11-28 02:25:47.452106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:13.799 [2024-11-28 02:25:47.452115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:13.799 [2024-11-28 02:25:47.452406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:13.799 [2024-11-28 02:25:47.452583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:13.799 [2024-11-28 02:25:47.452604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:13.799 [2024-11-28 02:25:47.452869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.799 BaseBdev4 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.799 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.059 [ 00:10:14.059 { 00:10:14.059 "name": "BaseBdev4", 00:10:14.059 "aliases": [ 00:10:14.059 "03b9c7c0-8627-4755-8cc6-cde512baf1fc" 00:10:14.059 ], 00:10:14.059 "product_name": "Malloc disk", 00:10:14.059 "block_size": 512, 00:10:14.059 "num_blocks": 65536, 00:10:14.059 "uuid": "03b9c7c0-8627-4755-8cc6-cde512baf1fc", 00:10:14.059 "assigned_rate_limits": { 00:10:14.059 "rw_ios_per_sec": 0, 00:10:14.059 "rw_mbytes_per_sec": 0, 00:10:14.059 "r_mbytes_per_sec": 0, 00:10:14.059 "w_mbytes_per_sec": 0 00:10:14.059 }, 00:10:14.059 "claimed": true, 00:10:14.059 "claim_type": "exclusive_write", 00:10:14.059 "zoned": false, 00:10:14.059 "supported_io_types": { 00:10:14.059 "read": true, 00:10:14.059 "write": true, 00:10:14.059 "unmap": true, 00:10:14.059 "flush": true, 00:10:14.059 "reset": true, 00:10:14.059 
"nvme_admin": false, 00:10:14.059 "nvme_io": false, 00:10:14.059 "nvme_io_md": false, 00:10:14.059 "write_zeroes": true, 00:10:14.059 "zcopy": true, 00:10:14.059 "get_zone_info": false, 00:10:14.059 "zone_management": false, 00:10:14.059 "zone_append": false, 00:10:14.059 "compare": false, 00:10:14.059 "compare_and_write": false, 00:10:14.059 "abort": true, 00:10:14.059 "seek_hole": false, 00:10:14.059 "seek_data": false, 00:10:14.059 "copy": true, 00:10:14.059 "nvme_iov_md": false 00:10:14.059 }, 00:10:14.059 "memory_domains": [ 00:10:14.059 { 00:10:14.059 "dma_device_id": "system", 00:10:14.059 "dma_device_type": 1 00:10:14.059 }, 00:10:14.059 { 00:10:14.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.059 "dma_device_type": 2 00:10:14.059 } 00:10:14.059 ], 00:10:14.059 "driver_specific": {} 00:10:14.059 } 00:10:14.059 ] 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.059 
02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.059 "name": "Existed_Raid", 00:10:14.059 "uuid": "a464ce8a-6351-4010-aa29-7ff914076883", 00:10:14.059 "strip_size_kb": 64, 00:10:14.059 "state": "online", 00:10:14.059 "raid_level": "concat", 00:10:14.059 "superblock": false, 00:10:14.059 "num_base_bdevs": 4, 00:10:14.059 "num_base_bdevs_discovered": 4, 00:10:14.059 "num_base_bdevs_operational": 4, 00:10:14.059 "base_bdevs_list": [ 00:10:14.059 { 00:10:14.059 "name": "BaseBdev1", 00:10:14.059 "uuid": "cc9e0af0-5689-4104-9f9e-bea499fa9ee5", 00:10:14.059 "is_configured": true, 00:10:14.059 "data_offset": 0, 00:10:14.059 "data_size": 65536 00:10:14.059 }, 00:10:14.059 { 00:10:14.059 "name": "BaseBdev2", 00:10:14.059 "uuid": "83ffeee3-b346-4cac-b9ba-c5325be886fd", 00:10:14.059 "is_configured": true, 00:10:14.059 "data_offset": 0, 00:10:14.059 "data_size": 65536 00:10:14.059 }, 00:10:14.059 { 00:10:14.059 "name": "BaseBdev3", 
00:10:14.059 "uuid": "76b7cbb3-fd6d-4943-b7f7-1659318650b2", 00:10:14.059 "is_configured": true, 00:10:14.059 "data_offset": 0, 00:10:14.059 "data_size": 65536 00:10:14.059 }, 00:10:14.059 { 00:10:14.059 "name": "BaseBdev4", 00:10:14.059 "uuid": "03b9c7c0-8627-4755-8cc6-cde512baf1fc", 00:10:14.059 "is_configured": true, 00:10:14.059 "data_offset": 0, 00:10:14.059 "data_size": 65536 00:10:14.059 } 00:10:14.059 ] 00:10:14.059 }' 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.059 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.319 [2024-11-28 02:25:47.936075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.319 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.319 
02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.319 "name": "Existed_Raid", 00:10:14.319 "aliases": [ 00:10:14.319 "a464ce8a-6351-4010-aa29-7ff914076883" 00:10:14.319 ], 00:10:14.319 "product_name": "Raid Volume", 00:10:14.319 "block_size": 512, 00:10:14.319 "num_blocks": 262144, 00:10:14.319 "uuid": "a464ce8a-6351-4010-aa29-7ff914076883", 00:10:14.319 "assigned_rate_limits": { 00:10:14.319 "rw_ios_per_sec": 0, 00:10:14.319 "rw_mbytes_per_sec": 0, 00:10:14.319 "r_mbytes_per_sec": 0, 00:10:14.319 "w_mbytes_per_sec": 0 00:10:14.319 }, 00:10:14.319 "claimed": false, 00:10:14.319 "zoned": false, 00:10:14.319 "supported_io_types": { 00:10:14.319 "read": true, 00:10:14.319 "write": true, 00:10:14.319 "unmap": true, 00:10:14.319 "flush": true, 00:10:14.319 "reset": true, 00:10:14.319 "nvme_admin": false, 00:10:14.319 "nvme_io": false, 00:10:14.319 "nvme_io_md": false, 00:10:14.319 "write_zeroes": true, 00:10:14.319 "zcopy": false, 00:10:14.319 "get_zone_info": false, 00:10:14.319 "zone_management": false, 00:10:14.319 "zone_append": false, 00:10:14.319 "compare": false, 00:10:14.319 "compare_and_write": false, 00:10:14.319 "abort": false, 00:10:14.319 "seek_hole": false, 00:10:14.319 "seek_data": false, 00:10:14.319 "copy": false, 00:10:14.319 "nvme_iov_md": false 00:10:14.319 }, 00:10:14.319 "memory_domains": [ 00:10:14.319 { 00:10:14.319 "dma_device_id": "system", 00:10:14.319 "dma_device_type": 1 00:10:14.319 }, 00:10:14.319 { 00:10:14.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.319 "dma_device_type": 2 00:10:14.319 }, 00:10:14.319 { 00:10:14.319 "dma_device_id": "system", 00:10:14.319 "dma_device_type": 1 00:10:14.319 }, 00:10:14.319 { 00:10:14.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.319 "dma_device_type": 2 00:10:14.319 }, 00:10:14.319 { 00:10:14.319 "dma_device_id": "system", 00:10:14.319 "dma_device_type": 1 00:10:14.319 }, 00:10:14.319 { 00:10:14.319 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:14.319 "dma_device_type": 2 00:10:14.319 }, 00:10:14.319 { 00:10:14.319 "dma_device_id": "system", 00:10:14.319 "dma_device_type": 1 00:10:14.319 }, 00:10:14.319 { 00:10:14.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.319 "dma_device_type": 2 00:10:14.319 } 00:10:14.319 ], 00:10:14.319 "driver_specific": { 00:10:14.320 "raid": { 00:10:14.320 "uuid": "a464ce8a-6351-4010-aa29-7ff914076883", 00:10:14.320 "strip_size_kb": 64, 00:10:14.320 "state": "online", 00:10:14.320 "raid_level": "concat", 00:10:14.320 "superblock": false, 00:10:14.320 "num_base_bdevs": 4, 00:10:14.320 "num_base_bdevs_discovered": 4, 00:10:14.320 "num_base_bdevs_operational": 4, 00:10:14.320 "base_bdevs_list": [ 00:10:14.320 { 00:10:14.320 "name": "BaseBdev1", 00:10:14.320 "uuid": "cc9e0af0-5689-4104-9f9e-bea499fa9ee5", 00:10:14.320 "is_configured": true, 00:10:14.320 "data_offset": 0, 00:10:14.320 "data_size": 65536 00:10:14.320 }, 00:10:14.320 { 00:10:14.320 "name": "BaseBdev2", 00:10:14.320 "uuid": "83ffeee3-b346-4cac-b9ba-c5325be886fd", 00:10:14.320 "is_configured": true, 00:10:14.320 "data_offset": 0, 00:10:14.320 "data_size": 65536 00:10:14.320 }, 00:10:14.320 { 00:10:14.320 "name": "BaseBdev3", 00:10:14.320 "uuid": "76b7cbb3-fd6d-4943-b7f7-1659318650b2", 00:10:14.320 "is_configured": true, 00:10:14.320 "data_offset": 0, 00:10:14.320 "data_size": 65536 00:10:14.320 }, 00:10:14.320 { 00:10:14.320 "name": "BaseBdev4", 00:10:14.320 "uuid": "03b9c7c0-8627-4755-8cc6-cde512baf1fc", 00:10:14.320 "is_configured": true, 00:10:14.320 "data_offset": 0, 00:10:14.320 "data_size": 65536 00:10:14.320 } 00:10:14.320 ] 00:10:14.320 } 00:10:14.320 } 00:10:14.320 }' 00:10:14.320 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:14.580 BaseBdev2 
00:10:14.580 BaseBdev3 00:10:14.580 BaseBdev4' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.580 02:25:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.580 02:25:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.580 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.580 [2024-11-28 02:25:48.239780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.580 [2024-11-28 02:25:48.239817] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.580 [2024-11-28 02:25:48.239873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.841 "name": "Existed_Raid", 00:10:14.841 "uuid": "a464ce8a-6351-4010-aa29-7ff914076883", 00:10:14.841 "strip_size_kb": 64, 00:10:14.841 "state": "offline", 00:10:14.841 "raid_level": "concat", 00:10:14.841 "superblock": false, 00:10:14.841 "num_base_bdevs": 4, 00:10:14.841 "num_base_bdevs_discovered": 3, 00:10:14.841 "num_base_bdevs_operational": 3, 00:10:14.841 "base_bdevs_list": [ 00:10:14.841 { 00:10:14.841 "name": null, 00:10:14.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.841 "is_configured": false, 00:10:14.841 "data_offset": 0, 00:10:14.841 "data_size": 65536 00:10:14.841 }, 00:10:14.841 { 00:10:14.841 "name": "BaseBdev2", 00:10:14.841 "uuid": "83ffeee3-b346-4cac-b9ba-c5325be886fd", 00:10:14.841 "is_configured": 
true, 00:10:14.841 "data_offset": 0, 00:10:14.841 "data_size": 65536 00:10:14.841 }, 00:10:14.841 { 00:10:14.841 "name": "BaseBdev3", 00:10:14.841 "uuid": "76b7cbb3-fd6d-4943-b7f7-1659318650b2", 00:10:14.841 "is_configured": true, 00:10:14.841 "data_offset": 0, 00:10:14.841 "data_size": 65536 00:10:14.841 }, 00:10:14.841 { 00:10:14.841 "name": "BaseBdev4", 00:10:14.841 "uuid": "03b9c7c0-8627-4755-8cc6-cde512baf1fc", 00:10:14.841 "is_configured": true, 00:10:14.841 "data_offset": 0, 00:10:14.841 "data_size": 65536 00:10:14.841 } 00:10:14.841 ] 00:10:14.841 }' 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.841 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.101 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:15.101 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.101 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.101 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.101 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.101 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 [2024-11-28 02:25:48.804242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.361 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.361 [2024-11-28 02:25:48.957891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.621 02:25:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.621 [2024-11-28 02:25:49.106357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:15.621 [2024-11-28 02:25:49.106412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.621 BaseBdev2 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.621 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.882 [ 00:10:15.882 { 00:10:15.882 "name": "BaseBdev2", 00:10:15.882 "aliases": [ 00:10:15.882 "0e6df190-fba3-4497-9771-b3a42b4f2ef7" 00:10:15.882 ], 00:10:15.882 "product_name": "Malloc disk", 00:10:15.882 "block_size": 512, 00:10:15.882 "num_blocks": 65536, 00:10:15.882 "uuid": "0e6df190-fba3-4497-9771-b3a42b4f2ef7", 00:10:15.882 "assigned_rate_limits": { 00:10:15.882 "rw_ios_per_sec": 0, 00:10:15.882 "rw_mbytes_per_sec": 0, 00:10:15.882 "r_mbytes_per_sec": 0, 00:10:15.882 "w_mbytes_per_sec": 0 00:10:15.882 }, 00:10:15.882 "claimed": false, 00:10:15.882 "zoned": false, 00:10:15.882 "supported_io_types": { 00:10:15.882 "read": true, 00:10:15.882 "write": true, 00:10:15.882 "unmap": true, 00:10:15.882 "flush": true, 00:10:15.882 "reset": true, 00:10:15.882 "nvme_admin": false, 00:10:15.882 "nvme_io": false, 00:10:15.882 "nvme_io_md": false, 00:10:15.882 "write_zeroes": true, 00:10:15.882 "zcopy": true, 00:10:15.882 "get_zone_info": false, 00:10:15.882 "zone_management": false, 00:10:15.882 "zone_append": false, 00:10:15.882 "compare": false, 00:10:15.882 "compare_and_write": false, 00:10:15.882 "abort": true, 00:10:15.882 "seek_hole": false, 00:10:15.882 
"seek_data": false, 00:10:15.882 "copy": true, 00:10:15.882 "nvme_iov_md": false 00:10:15.882 }, 00:10:15.882 "memory_domains": [ 00:10:15.882 { 00:10:15.882 "dma_device_id": "system", 00:10:15.882 "dma_device_type": 1 00:10:15.882 }, 00:10:15.882 { 00:10:15.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.882 "dma_device_type": 2 00:10:15.882 } 00:10:15.882 ], 00:10:15.882 "driver_specific": {} 00:10:15.882 } 00:10:15.882 ] 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.882 BaseBdev3 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.882 [ 00:10:15.882 { 00:10:15.882 "name": "BaseBdev3", 00:10:15.882 "aliases": [ 00:10:15.882 "0e9f66d4-07a8-46ef-bcf1-0270547e4e53" 00:10:15.882 ], 00:10:15.882 "product_name": "Malloc disk", 00:10:15.882 "block_size": 512, 00:10:15.882 "num_blocks": 65536, 00:10:15.882 "uuid": "0e9f66d4-07a8-46ef-bcf1-0270547e4e53", 00:10:15.882 "assigned_rate_limits": { 00:10:15.882 "rw_ios_per_sec": 0, 00:10:15.882 "rw_mbytes_per_sec": 0, 00:10:15.882 "r_mbytes_per_sec": 0, 00:10:15.882 "w_mbytes_per_sec": 0 00:10:15.882 }, 00:10:15.882 "claimed": false, 00:10:15.882 "zoned": false, 00:10:15.882 "supported_io_types": { 00:10:15.882 "read": true, 00:10:15.882 "write": true, 00:10:15.882 "unmap": true, 00:10:15.882 "flush": true, 00:10:15.882 "reset": true, 00:10:15.882 "nvme_admin": false, 00:10:15.882 "nvme_io": false, 00:10:15.882 "nvme_io_md": false, 00:10:15.882 "write_zeroes": true, 00:10:15.882 "zcopy": true, 00:10:15.882 "get_zone_info": false, 00:10:15.882 "zone_management": false, 00:10:15.882 "zone_append": false, 00:10:15.882 "compare": false, 00:10:15.882 "compare_and_write": false, 00:10:15.882 "abort": true, 00:10:15.882 "seek_hole": false, 00:10:15.882 "seek_data": false, 
00:10:15.882 "copy": true, 00:10:15.882 "nvme_iov_md": false 00:10:15.882 }, 00:10:15.882 "memory_domains": [ 00:10:15.882 { 00:10:15.882 "dma_device_id": "system", 00:10:15.882 "dma_device_type": 1 00:10:15.882 }, 00:10:15.882 { 00:10:15.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.882 "dma_device_type": 2 00:10:15.882 } 00:10:15.882 ], 00:10:15.882 "driver_specific": {} 00:10:15.882 } 00:10:15.882 ] 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.882 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.883 BaseBdev4 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.883 
02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.883 [ 00:10:15.883 { 00:10:15.883 "name": "BaseBdev4", 00:10:15.883 "aliases": [ 00:10:15.883 "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b" 00:10:15.883 ], 00:10:15.883 "product_name": "Malloc disk", 00:10:15.883 "block_size": 512, 00:10:15.883 "num_blocks": 65536, 00:10:15.883 "uuid": "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b", 00:10:15.883 "assigned_rate_limits": { 00:10:15.883 "rw_ios_per_sec": 0, 00:10:15.883 "rw_mbytes_per_sec": 0, 00:10:15.883 "r_mbytes_per_sec": 0, 00:10:15.883 "w_mbytes_per_sec": 0 00:10:15.883 }, 00:10:15.883 "claimed": false, 00:10:15.883 "zoned": false, 00:10:15.883 "supported_io_types": { 00:10:15.883 "read": true, 00:10:15.883 "write": true, 00:10:15.883 "unmap": true, 00:10:15.883 "flush": true, 00:10:15.883 "reset": true, 00:10:15.883 "nvme_admin": false, 00:10:15.883 "nvme_io": false, 00:10:15.883 "nvme_io_md": false, 00:10:15.883 "write_zeroes": true, 00:10:15.883 "zcopy": true, 00:10:15.883 "get_zone_info": false, 00:10:15.883 "zone_management": false, 00:10:15.883 "zone_append": false, 00:10:15.883 "compare": false, 00:10:15.883 "compare_and_write": false, 00:10:15.883 "abort": true, 00:10:15.883 "seek_hole": false, 00:10:15.883 "seek_data": false, 00:10:15.883 
"copy": true, 00:10:15.883 "nvme_iov_md": false 00:10:15.883 }, 00:10:15.883 "memory_domains": [ 00:10:15.883 { 00:10:15.883 "dma_device_id": "system", 00:10:15.883 "dma_device_type": 1 00:10:15.883 }, 00:10:15.883 { 00:10:15.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.883 "dma_device_type": 2 00:10:15.883 } 00:10:15.883 ], 00:10:15.883 "driver_specific": {} 00:10:15.883 } 00:10:15.883 ] 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.883 [2024-11-28 02:25:49.482326] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.883 [2024-11-28 02:25:49.482426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.883 [2024-11-28 02:25:49.482479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.883 [2024-11-28 02:25:49.484267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.883 [2024-11-28 02:25:49.484375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.883 02:25:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.883 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.883 "name": "Existed_Raid", 00:10:15.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.883 "strip_size_kb": 64, 00:10:15.883 "state": "configuring", 00:10:15.883 
"raid_level": "concat", 00:10:15.883 "superblock": false, 00:10:15.883 "num_base_bdevs": 4, 00:10:15.883 "num_base_bdevs_discovered": 3, 00:10:15.883 "num_base_bdevs_operational": 4, 00:10:15.883 "base_bdevs_list": [ 00:10:15.883 { 00:10:15.883 "name": "BaseBdev1", 00:10:15.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.883 "is_configured": false, 00:10:15.883 "data_offset": 0, 00:10:15.883 "data_size": 0 00:10:15.883 }, 00:10:15.883 { 00:10:15.883 "name": "BaseBdev2", 00:10:15.883 "uuid": "0e6df190-fba3-4497-9771-b3a42b4f2ef7", 00:10:15.883 "is_configured": true, 00:10:15.884 "data_offset": 0, 00:10:15.884 "data_size": 65536 00:10:15.884 }, 00:10:15.884 { 00:10:15.884 "name": "BaseBdev3", 00:10:15.884 "uuid": "0e9f66d4-07a8-46ef-bcf1-0270547e4e53", 00:10:15.884 "is_configured": true, 00:10:15.884 "data_offset": 0, 00:10:15.884 "data_size": 65536 00:10:15.884 }, 00:10:15.884 { 00:10:15.884 "name": "BaseBdev4", 00:10:15.884 "uuid": "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b", 00:10:15.884 "is_configured": true, 00:10:15.884 "data_offset": 0, 00:10:15.884 "data_size": 65536 00:10:15.884 } 00:10:15.884 ] 00:10:15.884 }' 00:10:15.884 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.884 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.465 [2024-11-28 02:25:49.869672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.465 "name": "Existed_Raid", 00:10:16.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.465 "strip_size_kb": 64, 00:10:16.465 "state": "configuring", 00:10:16.465 "raid_level": "concat", 00:10:16.465 "superblock": false, 
00:10:16.465 "num_base_bdevs": 4, 00:10:16.465 "num_base_bdevs_discovered": 2, 00:10:16.465 "num_base_bdevs_operational": 4, 00:10:16.465 "base_bdevs_list": [ 00:10:16.465 { 00:10:16.465 "name": "BaseBdev1", 00:10:16.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.465 "is_configured": false, 00:10:16.465 "data_offset": 0, 00:10:16.465 "data_size": 0 00:10:16.465 }, 00:10:16.465 { 00:10:16.465 "name": null, 00:10:16.465 "uuid": "0e6df190-fba3-4497-9771-b3a42b4f2ef7", 00:10:16.465 "is_configured": false, 00:10:16.465 "data_offset": 0, 00:10:16.465 "data_size": 65536 00:10:16.465 }, 00:10:16.465 { 00:10:16.465 "name": "BaseBdev3", 00:10:16.465 "uuid": "0e9f66d4-07a8-46ef-bcf1-0270547e4e53", 00:10:16.465 "is_configured": true, 00:10:16.465 "data_offset": 0, 00:10:16.465 "data_size": 65536 00:10:16.465 }, 00:10:16.465 { 00:10:16.465 "name": "BaseBdev4", 00:10:16.465 "uuid": "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b", 00:10:16.465 "is_configured": true, 00:10:16.465 "data_offset": 0, 00:10:16.465 "data_size": 65536 00:10:16.465 } 00:10:16.465 ] 00:10:16.465 }' 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.465 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.742 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.742 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.742 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.742 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.742 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.742 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:16.742 02:25:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.742 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.742 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.742 [2024-11-28 02:25:50.393362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.742 BaseBdev1 00:10:16.742 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.742 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.743 02:25:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.743 [ 00:10:16.743 { 00:10:16.743 "name": "BaseBdev1", 00:10:16.743 "aliases": [ 00:10:16.743 "d1917ad1-622c-48ea-99f8-cc863fb4096a" 00:10:16.743 ], 00:10:17.002 "product_name": "Malloc disk", 00:10:17.002 "block_size": 512, 00:10:17.002 "num_blocks": 65536, 00:10:17.002 "uuid": "d1917ad1-622c-48ea-99f8-cc863fb4096a", 00:10:17.002 "assigned_rate_limits": { 00:10:17.002 "rw_ios_per_sec": 0, 00:10:17.002 "rw_mbytes_per_sec": 0, 00:10:17.002 "r_mbytes_per_sec": 0, 00:10:17.002 "w_mbytes_per_sec": 0 00:10:17.002 }, 00:10:17.002 "claimed": true, 00:10:17.002 "claim_type": "exclusive_write", 00:10:17.002 "zoned": false, 00:10:17.002 "supported_io_types": { 00:10:17.002 "read": true, 00:10:17.002 "write": true, 00:10:17.002 "unmap": true, 00:10:17.002 "flush": true, 00:10:17.002 "reset": true, 00:10:17.002 "nvme_admin": false, 00:10:17.002 "nvme_io": false, 00:10:17.002 "nvme_io_md": false, 00:10:17.002 "write_zeroes": true, 00:10:17.002 "zcopy": true, 00:10:17.002 "get_zone_info": false, 00:10:17.002 "zone_management": false, 00:10:17.002 "zone_append": false, 00:10:17.002 "compare": false, 00:10:17.002 "compare_and_write": false, 00:10:17.002 "abort": true, 00:10:17.002 "seek_hole": false, 00:10:17.002 "seek_data": false, 00:10:17.002 "copy": true, 00:10:17.002 "nvme_iov_md": false 00:10:17.002 }, 00:10:17.002 "memory_domains": [ 00:10:17.002 { 00:10:17.002 "dma_device_id": "system", 00:10:17.002 "dma_device_type": 1 00:10:17.002 }, 00:10:17.002 { 00:10:17.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.003 "dma_device_type": 2 00:10:17.003 } 00:10:17.003 ], 00:10:17.003 "driver_specific": {} 00:10:17.003 } 00:10:17.003 ] 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.003 "name": "Existed_Raid", 00:10:17.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.003 "strip_size_kb": 64, 00:10:17.003 "state": "configuring", 00:10:17.003 "raid_level": "concat", 00:10:17.003 "superblock": false, 
00:10:17.003 "num_base_bdevs": 4, 00:10:17.003 "num_base_bdevs_discovered": 3, 00:10:17.003 "num_base_bdevs_operational": 4, 00:10:17.003 "base_bdevs_list": [ 00:10:17.003 { 00:10:17.003 "name": "BaseBdev1", 00:10:17.003 "uuid": "d1917ad1-622c-48ea-99f8-cc863fb4096a", 00:10:17.003 "is_configured": true, 00:10:17.003 "data_offset": 0, 00:10:17.003 "data_size": 65536 00:10:17.003 }, 00:10:17.003 { 00:10:17.003 "name": null, 00:10:17.003 "uuid": "0e6df190-fba3-4497-9771-b3a42b4f2ef7", 00:10:17.003 "is_configured": false, 00:10:17.003 "data_offset": 0, 00:10:17.003 "data_size": 65536 00:10:17.003 }, 00:10:17.003 { 00:10:17.003 "name": "BaseBdev3", 00:10:17.003 "uuid": "0e9f66d4-07a8-46ef-bcf1-0270547e4e53", 00:10:17.003 "is_configured": true, 00:10:17.003 "data_offset": 0, 00:10:17.003 "data_size": 65536 00:10:17.003 }, 00:10:17.003 { 00:10:17.003 "name": "BaseBdev4", 00:10:17.003 "uuid": "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b", 00:10:17.003 "is_configured": true, 00:10:17.003 "data_offset": 0, 00:10:17.003 "data_size": 65536 00:10:17.003 } 00:10:17.003 ] 00:10:17.003 }' 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.003 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.263 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.263 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.263 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.263 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:17.263 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.263 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:17.263 02:25:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.264 [2024-11-28 02:25:50.888809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.264 02:25:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.264 "name": "Existed_Raid", 00:10:17.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.264 "strip_size_kb": 64, 00:10:17.264 "state": "configuring", 00:10:17.264 "raid_level": "concat", 00:10:17.264 "superblock": false, 00:10:17.264 "num_base_bdevs": 4, 00:10:17.264 "num_base_bdevs_discovered": 2, 00:10:17.264 "num_base_bdevs_operational": 4, 00:10:17.264 "base_bdevs_list": [ 00:10:17.264 { 00:10:17.264 "name": "BaseBdev1", 00:10:17.264 "uuid": "d1917ad1-622c-48ea-99f8-cc863fb4096a", 00:10:17.264 "is_configured": true, 00:10:17.264 "data_offset": 0, 00:10:17.264 "data_size": 65536 00:10:17.264 }, 00:10:17.264 { 00:10:17.264 "name": null, 00:10:17.264 "uuid": "0e6df190-fba3-4497-9771-b3a42b4f2ef7", 00:10:17.264 "is_configured": false, 00:10:17.264 "data_offset": 0, 00:10:17.264 "data_size": 65536 00:10:17.264 }, 00:10:17.264 { 00:10:17.264 "name": null, 00:10:17.264 "uuid": "0e9f66d4-07a8-46ef-bcf1-0270547e4e53", 00:10:17.264 "is_configured": false, 00:10:17.264 "data_offset": 0, 00:10:17.264 "data_size": 65536 00:10:17.264 }, 00:10:17.264 { 00:10:17.264 "name": "BaseBdev4", 00:10:17.264 "uuid": "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b", 00:10:17.264 "is_configured": true, 00:10:17.264 "data_offset": 0, 00:10:17.264 "data_size": 65536 00:10:17.264 } 00:10:17.264 ] 00:10:17.264 }' 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.264 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.834 [2024-11-28 02:25:51.336075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.834 "name": "Existed_Raid", 00:10:17.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.834 "strip_size_kb": 64, 00:10:17.834 "state": "configuring", 00:10:17.834 "raid_level": "concat", 00:10:17.834 "superblock": false, 00:10:17.834 "num_base_bdevs": 4, 00:10:17.834 "num_base_bdevs_discovered": 3, 00:10:17.834 "num_base_bdevs_operational": 4, 00:10:17.834 "base_bdevs_list": [ 00:10:17.834 { 00:10:17.834 "name": "BaseBdev1", 00:10:17.834 "uuid": "d1917ad1-622c-48ea-99f8-cc863fb4096a", 00:10:17.834 "is_configured": true, 00:10:17.834 "data_offset": 0, 00:10:17.834 "data_size": 65536 00:10:17.834 }, 00:10:17.834 { 00:10:17.834 "name": null, 00:10:17.834 "uuid": "0e6df190-fba3-4497-9771-b3a42b4f2ef7", 00:10:17.834 "is_configured": false, 00:10:17.834 "data_offset": 0, 00:10:17.834 "data_size": 65536 00:10:17.834 }, 00:10:17.834 { 00:10:17.834 "name": "BaseBdev3", 00:10:17.834 "uuid": 
"0e9f66d4-07a8-46ef-bcf1-0270547e4e53", 00:10:17.834 "is_configured": true, 00:10:17.834 "data_offset": 0, 00:10:17.834 "data_size": 65536 00:10:17.834 }, 00:10:17.834 { 00:10:17.834 "name": "BaseBdev4", 00:10:17.834 "uuid": "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b", 00:10:17.834 "is_configured": true, 00:10:17.834 "data_offset": 0, 00:10:17.834 "data_size": 65536 00:10:17.834 } 00:10:17.834 ] 00:10:17.834 }' 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.834 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.094 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.094 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.094 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.094 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:18.094 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.354 [2024-11-28 02:25:51.787744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.354 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.354 "name": "Existed_Raid", 00:10:18.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.354 "strip_size_kb": 64, 00:10:18.354 "state": "configuring", 00:10:18.354 "raid_level": "concat", 00:10:18.354 "superblock": false, 00:10:18.354 "num_base_bdevs": 4, 00:10:18.354 
"num_base_bdevs_discovered": 2, 00:10:18.354 "num_base_bdevs_operational": 4, 00:10:18.354 "base_bdevs_list": [ 00:10:18.354 { 00:10:18.354 "name": null, 00:10:18.354 "uuid": "d1917ad1-622c-48ea-99f8-cc863fb4096a", 00:10:18.354 "is_configured": false, 00:10:18.354 "data_offset": 0, 00:10:18.354 "data_size": 65536 00:10:18.354 }, 00:10:18.354 { 00:10:18.354 "name": null, 00:10:18.354 "uuid": "0e6df190-fba3-4497-9771-b3a42b4f2ef7", 00:10:18.354 "is_configured": false, 00:10:18.355 "data_offset": 0, 00:10:18.355 "data_size": 65536 00:10:18.355 }, 00:10:18.355 { 00:10:18.355 "name": "BaseBdev3", 00:10:18.355 "uuid": "0e9f66d4-07a8-46ef-bcf1-0270547e4e53", 00:10:18.355 "is_configured": true, 00:10:18.355 "data_offset": 0, 00:10:18.355 "data_size": 65536 00:10:18.355 }, 00:10:18.355 { 00:10:18.355 "name": "BaseBdev4", 00:10:18.355 "uuid": "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b", 00:10:18.355 "is_configured": true, 00:10:18.355 "data_offset": 0, 00:10:18.355 "data_size": 65536 00:10:18.355 } 00:10:18.355 ] 00:10:18.355 }' 00:10:18.355 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.355 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.615 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.615 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.615 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.615 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.875 [2024-11-28 02:25:52.327734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.875 "name": "Existed_Raid", 00:10:18.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.875 "strip_size_kb": 64, 00:10:18.875 "state": "configuring", 00:10:18.875 "raid_level": "concat", 00:10:18.875 "superblock": false, 00:10:18.875 "num_base_bdevs": 4, 00:10:18.875 "num_base_bdevs_discovered": 3, 00:10:18.875 "num_base_bdevs_operational": 4, 00:10:18.875 "base_bdevs_list": [ 00:10:18.875 { 00:10:18.875 "name": null, 00:10:18.875 "uuid": "d1917ad1-622c-48ea-99f8-cc863fb4096a", 00:10:18.875 "is_configured": false, 00:10:18.875 "data_offset": 0, 00:10:18.875 "data_size": 65536 00:10:18.875 }, 00:10:18.875 { 00:10:18.875 "name": "BaseBdev2", 00:10:18.875 "uuid": "0e6df190-fba3-4497-9771-b3a42b4f2ef7", 00:10:18.875 "is_configured": true, 00:10:18.875 "data_offset": 0, 00:10:18.875 "data_size": 65536 00:10:18.875 }, 00:10:18.875 { 00:10:18.875 "name": "BaseBdev3", 00:10:18.875 "uuid": "0e9f66d4-07a8-46ef-bcf1-0270547e4e53", 00:10:18.875 "is_configured": true, 00:10:18.875 "data_offset": 0, 00:10:18.875 "data_size": 65536 00:10:18.875 }, 00:10:18.875 { 00:10:18.875 "name": "BaseBdev4", 00:10:18.875 "uuid": "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b", 00:10:18.875 "is_configured": true, 00:10:18.875 "data_offset": 0, 00:10:18.875 "data_size": 65536 00:10:18.875 } 00:10:18.875 ] 00:10:18.875 }' 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.875 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.135 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:19.135 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.135 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.135 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.135 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d1917ad1-622c-48ea-99f8-cc863fb4096a 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.395 [2024-11-28 02:25:52.885512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:19.395 [2024-11-28 02:25:52.885587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:19.395 [2024-11-28 02:25:52.885595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:19.395 [2024-11-28 02:25:52.885882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:19.395 [2024-11-28 02:25:52.886077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:19.395 [2024-11-28 02:25:52.886091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:19.395 [2024-11-28 02:25:52.886364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.395 NewBaseBdev 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:19.395 [ 00:10:19.395 { 00:10:19.395 "name": "NewBaseBdev", 00:10:19.395 "aliases": [ 00:10:19.395 "d1917ad1-622c-48ea-99f8-cc863fb4096a" 00:10:19.395 ], 00:10:19.395 "product_name": "Malloc disk", 00:10:19.395 "block_size": 512, 00:10:19.395 "num_blocks": 65536, 00:10:19.395 "uuid": "d1917ad1-622c-48ea-99f8-cc863fb4096a", 00:10:19.395 "assigned_rate_limits": { 00:10:19.395 "rw_ios_per_sec": 0, 00:10:19.395 "rw_mbytes_per_sec": 0, 00:10:19.395 "r_mbytes_per_sec": 0, 00:10:19.395 "w_mbytes_per_sec": 0 00:10:19.395 }, 00:10:19.395 "claimed": true, 00:10:19.395 "claim_type": "exclusive_write", 00:10:19.395 "zoned": false, 00:10:19.395 "supported_io_types": { 00:10:19.395 "read": true, 00:10:19.395 "write": true, 00:10:19.395 "unmap": true, 00:10:19.395 "flush": true, 00:10:19.395 "reset": true, 00:10:19.395 "nvme_admin": false, 00:10:19.395 "nvme_io": false, 00:10:19.395 "nvme_io_md": false, 00:10:19.395 "write_zeroes": true, 00:10:19.395 "zcopy": true, 00:10:19.395 "get_zone_info": false, 00:10:19.395 "zone_management": false, 00:10:19.395 "zone_append": false, 00:10:19.395 "compare": false, 00:10:19.395 "compare_and_write": false, 00:10:19.395 "abort": true, 00:10:19.395 "seek_hole": false, 00:10:19.395 "seek_data": false, 00:10:19.395 "copy": true, 00:10:19.395 "nvme_iov_md": false 00:10:19.395 }, 00:10:19.395 "memory_domains": [ 00:10:19.395 { 00:10:19.395 "dma_device_id": "system", 00:10:19.395 "dma_device_type": 1 00:10:19.395 }, 00:10:19.395 { 00:10:19.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.395 "dma_device_type": 2 00:10:19.395 } 00:10:19.395 ], 00:10:19.395 "driver_specific": {} 00:10:19.395 } 00:10:19.395 ] 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.395 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.396 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.396 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.396 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.396 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.396 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.396 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.396 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.396 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.396 "name": "Existed_Raid", 00:10:19.396 "uuid": "e02cfcba-e101-482d-a1b4-2d61540961a9", 00:10:19.396 "strip_size_kb": 64, 00:10:19.396 "state": "online", 00:10:19.396 "raid_level": "concat", 00:10:19.396 "superblock": false, 00:10:19.396 
"num_base_bdevs": 4, 00:10:19.396 "num_base_bdevs_discovered": 4, 00:10:19.396 "num_base_bdevs_operational": 4, 00:10:19.396 "base_bdevs_list": [ 00:10:19.396 { 00:10:19.396 "name": "NewBaseBdev", 00:10:19.396 "uuid": "d1917ad1-622c-48ea-99f8-cc863fb4096a", 00:10:19.396 "is_configured": true, 00:10:19.396 "data_offset": 0, 00:10:19.396 "data_size": 65536 00:10:19.396 }, 00:10:19.396 { 00:10:19.396 "name": "BaseBdev2", 00:10:19.396 "uuid": "0e6df190-fba3-4497-9771-b3a42b4f2ef7", 00:10:19.396 "is_configured": true, 00:10:19.396 "data_offset": 0, 00:10:19.396 "data_size": 65536 00:10:19.396 }, 00:10:19.396 { 00:10:19.396 "name": "BaseBdev3", 00:10:19.396 "uuid": "0e9f66d4-07a8-46ef-bcf1-0270547e4e53", 00:10:19.396 "is_configured": true, 00:10:19.396 "data_offset": 0, 00:10:19.396 "data_size": 65536 00:10:19.396 }, 00:10:19.396 { 00:10:19.396 "name": "BaseBdev4", 00:10:19.396 "uuid": "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b", 00:10:19.396 "is_configured": true, 00:10:19.396 "data_offset": 0, 00:10:19.396 "data_size": 65536 00:10:19.396 } 00:10:19.396 ] 00:10:19.396 }' 00:10:19.396 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.396 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.966 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.966 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.966 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.966 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.966 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.966 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.966 02:25:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.967 [2024-11-28 02:25:53.381175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.967 "name": "Existed_Raid", 00:10:19.967 "aliases": [ 00:10:19.967 "e02cfcba-e101-482d-a1b4-2d61540961a9" 00:10:19.967 ], 00:10:19.967 "product_name": "Raid Volume", 00:10:19.967 "block_size": 512, 00:10:19.967 "num_blocks": 262144, 00:10:19.967 "uuid": "e02cfcba-e101-482d-a1b4-2d61540961a9", 00:10:19.967 "assigned_rate_limits": { 00:10:19.967 "rw_ios_per_sec": 0, 00:10:19.967 "rw_mbytes_per_sec": 0, 00:10:19.967 "r_mbytes_per_sec": 0, 00:10:19.967 "w_mbytes_per_sec": 0 00:10:19.967 }, 00:10:19.967 "claimed": false, 00:10:19.967 "zoned": false, 00:10:19.967 "supported_io_types": { 00:10:19.967 "read": true, 00:10:19.967 "write": true, 00:10:19.967 "unmap": true, 00:10:19.967 "flush": true, 00:10:19.967 "reset": true, 00:10:19.967 "nvme_admin": false, 00:10:19.967 "nvme_io": false, 00:10:19.967 "nvme_io_md": false, 00:10:19.967 "write_zeroes": true, 00:10:19.967 "zcopy": false, 00:10:19.967 "get_zone_info": false, 00:10:19.967 "zone_management": false, 00:10:19.967 "zone_append": false, 00:10:19.967 "compare": false, 00:10:19.967 "compare_and_write": false, 00:10:19.967 "abort": false, 00:10:19.967 "seek_hole": false, 00:10:19.967 "seek_data": false, 00:10:19.967 "copy": false, 00:10:19.967 "nvme_iov_md": false 00:10:19.967 }, 
00:10:19.967 "memory_domains": [ 00:10:19.967 { 00:10:19.967 "dma_device_id": "system", 00:10:19.967 "dma_device_type": 1 00:10:19.967 }, 00:10:19.967 { 00:10:19.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.967 "dma_device_type": 2 00:10:19.967 }, 00:10:19.967 { 00:10:19.967 "dma_device_id": "system", 00:10:19.967 "dma_device_type": 1 00:10:19.967 }, 00:10:19.967 { 00:10:19.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.967 "dma_device_type": 2 00:10:19.967 }, 00:10:19.967 { 00:10:19.967 "dma_device_id": "system", 00:10:19.967 "dma_device_type": 1 00:10:19.967 }, 00:10:19.967 { 00:10:19.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.967 "dma_device_type": 2 00:10:19.967 }, 00:10:19.967 { 00:10:19.967 "dma_device_id": "system", 00:10:19.967 "dma_device_type": 1 00:10:19.967 }, 00:10:19.967 { 00:10:19.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.967 "dma_device_type": 2 00:10:19.967 } 00:10:19.967 ], 00:10:19.967 "driver_specific": { 00:10:19.967 "raid": { 00:10:19.967 "uuid": "e02cfcba-e101-482d-a1b4-2d61540961a9", 00:10:19.967 "strip_size_kb": 64, 00:10:19.967 "state": "online", 00:10:19.967 "raid_level": "concat", 00:10:19.967 "superblock": false, 00:10:19.967 "num_base_bdevs": 4, 00:10:19.967 "num_base_bdevs_discovered": 4, 00:10:19.967 "num_base_bdevs_operational": 4, 00:10:19.967 "base_bdevs_list": [ 00:10:19.967 { 00:10:19.967 "name": "NewBaseBdev", 00:10:19.967 "uuid": "d1917ad1-622c-48ea-99f8-cc863fb4096a", 00:10:19.967 "is_configured": true, 00:10:19.967 "data_offset": 0, 00:10:19.967 "data_size": 65536 00:10:19.967 }, 00:10:19.967 { 00:10:19.967 "name": "BaseBdev2", 00:10:19.967 "uuid": "0e6df190-fba3-4497-9771-b3a42b4f2ef7", 00:10:19.967 "is_configured": true, 00:10:19.967 "data_offset": 0, 00:10:19.967 "data_size": 65536 00:10:19.967 }, 00:10:19.967 { 00:10:19.967 "name": "BaseBdev3", 00:10:19.967 "uuid": "0e9f66d4-07a8-46ef-bcf1-0270547e4e53", 00:10:19.967 "is_configured": true, 00:10:19.967 "data_offset": 0, 
00:10:19.967 "data_size": 65536 00:10:19.967 }, 00:10:19.967 { 00:10:19.967 "name": "BaseBdev4", 00:10:19.967 "uuid": "66d5842e-78bb-4d00-a93d-4f2c7ba5f07b", 00:10:19.967 "is_configured": true, 00:10:19.967 "data_offset": 0, 00:10:19.967 "data_size": 65536 00:10:19.967 } 00:10:19.967 ] 00:10:19.967 } 00:10:19.967 } 00:10:19.967 }' 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:19.967 BaseBdev2 00:10:19.967 BaseBdev3 00:10:19.967 BaseBdev4' 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.967 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.968 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.968 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.968 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.968 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:19.968 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.968 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.968 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.968 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.228 [2024-11-28 02:25:53.712188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:20.228 [2024-11-28 02:25:53.712277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.228 [2024-11-28 02:25:53.712367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.228 [2024-11-28 02:25:53.712444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.228 [2024-11-28 02:25:53.712455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71067 00:10:20.228 02:25:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71067 ']' 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71067 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71067 00:10:20.228 killing process with pid 71067 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71067' 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71067 00:10:20.228 [2024-11-28 02:25:53.760299] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.228 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71067 00:10:20.488 [2024-11-28 02:25:54.153235] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.870 02:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:21.870 00:10:21.870 real 0m11.145s 00:10:21.870 user 0m17.637s 00:10:21.870 sys 0m1.955s 00:10:21.870 02:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.870 ************************************ 00:10:21.870 END TEST raid_state_function_test 00:10:21.870 ************************************ 00:10:21.870 02:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.870 02:25:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:21.870 02:25:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:21.870 02:25:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.870 02:25:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.870 ************************************ 00:10:21.870 START TEST raid_state_function_test_sb 00:10:21.871 ************************************ 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71734 00:10:21.871 02:25:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71734' 00:10:21.871 Process raid pid: 71734 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71734 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71734 ']' 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.871 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.871 [2024-11-28 02:25:55.431986] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:21.871 [2024-11-28 02:25:55.432184] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.131 [2024-11-28 02:25:55.605689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.131 [2024-11-28 02:25:55.718845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.392 [2024-11-28 02:25:55.927390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.392 [2024-11-28 02:25:55.927489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.652 [2024-11-28 02:25:56.263607] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.652 [2024-11-28 02:25:56.263671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.652 [2024-11-28 02:25:56.263683] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.652 [2024-11-28 02:25:56.263694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.652 [2024-11-28 02:25:56.263708] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:22.652 [2024-11-28 02:25:56.263720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.652 [2024-11-28 02:25:56.263727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:22.652 [2024-11-28 02:25:56.263738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.652 02:25:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.652 "name": "Existed_Raid", 00:10:22.652 "uuid": "af27ed6d-26d6-47d8-87e0-a97dd8657f30", 00:10:22.652 "strip_size_kb": 64, 00:10:22.652 "state": "configuring", 00:10:22.652 "raid_level": "concat", 00:10:22.652 "superblock": true, 00:10:22.652 "num_base_bdevs": 4, 00:10:22.652 "num_base_bdevs_discovered": 0, 00:10:22.652 "num_base_bdevs_operational": 4, 00:10:22.652 "base_bdevs_list": [ 00:10:22.652 { 00:10:22.652 "name": "BaseBdev1", 00:10:22.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.652 "is_configured": false, 00:10:22.652 "data_offset": 0, 00:10:22.652 "data_size": 0 00:10:22.652 }, 00:10:22.652 { 00:10:22.652 "name": "BaseBdev2", 00:10:22.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.652 "is_configured": false, 00:10:22.652 "data_offset": 0, 00:10:22.652 "data_size": 0 00:10:22.652 }, 00:10:22.652 { 00:10:22.652 "name": "BaseBdev3", 00:10:22.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.652 "is_configured": false, 00:10:22.652 "data_offset": 0, 00:10:22.652 "data_size": 0 00:10:22.652 }, 00:10:22.652 { 00:10:22.652 "name": "BaseBdev4", 00:10:22.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.652 "is_configured": false, 00:10:22.652 "data_offset": 0, 00:10:22.652 "data_size": 0 00:10:22.652 } 00:10:22.652 ] 00:10:22.652 }' 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.652 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.222 02:25:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.222 [2024-11-28 02:25:56.686846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.222 [2024-11-28 02:25:56.686976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.222 [2024-11-28 02:25:56.698832] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.222 [2024-11-28 02:25:56.698940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.222 [2024-11-28 02:25:56.698975] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.222 [2024-11-28 02:25:56.699005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.222 [2024-11-28 02:25:56.699027] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:23.222 [2024-11-28 02:25:56.699058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:23.222 [2024-11-28 02:25:56.699088] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:23.222 [2024-11-28 02:25:56.699119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.222 [2024-11-28 02:25:56.745838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.222 BaseBdev1 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.222 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.222 [ 00:10:23.222 { 00:10:23.222 "name": "BaseBdev1", 00:10:23.222 "aliases": [ 00:10:23.222 "35b5bcf1-6652-40bf-a951-b2c099430084" 00:10:23.222 ], 00:10:23.222 "product_name": "Malloc disk", 00:10:23.222 "block_size": 512, 00:10:23.222 "num_blocks": 65536, 00:10:23.222 "uuid": "35b5bcf1-6652-40bf-a951-b2c099430084", 00:10:23.222 "assigned_rate_limits": { 00:10:23.222 "rw_ios_per_sec": 0, 00:10:23.222 "rw_mbytes_per_sec": 0, 00:10:23.222 "r_mbytes_per_sec": 0, 00:10:23.222 "w_mbytes_per_sec": 0 00:10:23.222 }, 00:10:23.222 "claimed": true, 00:10:23.222 "claim_type": "exclusive_write", 00:10:23.222 "zoned": false, 00:10:23.223 "supported_io_types": { 00:10:23.223 "read": true, 00:10:23.223 "write": true, 00:10:23.223 "unmap": true, 00:10:23.223 "flush": true, 00:10:23.223 "reset": true, 00:10:23.223 "nvme_admin": false, 00:10:23.223 "nvme_io": false, 00:10:23.223 "nvme_io_md": false, 00:10:23.223 "write_zeroes": true, 00:10:23.223 "zcopy": true, 00:10:23.223 "get_zone_info": false, 00:10:23.223 "zone_management": false, 00:10:23.223 "zone_append": false, 00:10:23.223 "compare": false, 00:10:23.223 "compare_and_write": false, 00:10:23.223 "abort": true, 00:10:23.223 "seek_hole": false, 00:10:23.223 "seek_data": false, 00:10:23.223 "copy": true, 00:10:23.223 "nvme_iov_md": false 00:10:23.223 }, 00:10:23.223 "memory_domains": [ 00:10:23.223 { 00:10:23.223 "dma_device_id": "system", 00:10:23.223 "dma_device_type": 1 00:10:23.223 }, 00:10:23.223 { 00:10:23.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.223 "dma_device_type": 2 00:10:23.223 } 
00:10:23.223 ], 00:10:23.223 "driver_specific": {} 00:10:23.223 } 00:10:23.223 ] 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.223 02:25:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.223 "name": "Existed_Raid", 00:10:23.223 "uuid": "947c663e-a671-4c6c-8939-d8d0d4d06d1c", 00:10:23.223 "strip_size_kb": 64, 00:10:23.223 "state": "configuring", 00:10:23.223 "raid_level": "concat", 00:10:23.223 "superblock": true, 00:10:23.223 "num_base_bdevs": 4, 00:10:23.223 "num_base_bdevs_discovered": 1, 00:10:23.223 "num_base_bdevs_operational": 4, 00:10:23.223 "base_bdevs_list": [ 00:10:23.223 { 00:10:23.223 "name": "BaseBdev1", 00:10:23.223 "uuid": "35b5bcf1-6652-40bf-a951-b2c099430084", 00:10:23.223 "is_configured": true, 00:10:23.223 "data_offset": 2048, 00:10:23.223 "data_size": 63488 00:10:23.223 }, 00:10:23.223 { 00:10:23.223 "name": "BaseBdev2", 00:10:23.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.223 "is_configured": false, 00:10:23.223 "data_offset": 0, 00:10:23.223 "data_size": 0 00:10:23.223 }, 00:10:23.223 { 00:10:23.223 "name": "BaseBdev3", 00:10:23.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.223 "is_configured": false, 00:10:23.223 "data_offset": 0, 00:10:23.223 "data_size": 0 00:10:23.223 }, 00:10:23.223 { 00:10:23.223 "name": "BaseBdev4", 00:10:23.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.223 "is_configured": false, 00:10:23.223 "data_offset": 0, 00:10:23.223 "data_size": 0 00:10:23.223 } 00:10:23.223 ] 00:10:23.223 }' 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.223 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.793 02:25:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.793 [2024-11-28 02:25:57.213094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.793 [2024-11-28 02:25:57.213157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.793 [2024-11-28 02:25:57.221154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.793 [2024-11-28 02:25:57.222999] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.793 [2024-11-28 02:25:57.223090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.793 [2024-11-28 02:25:57.223124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:23.793 [2024-11-28 02:25:57.223155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:23.793 [2024-11-28 02:25:57.223179] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:23.793 [2024-11-28 02:25:57.223205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.793 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:23.793 "name": "Existed_Raid", 00:10:23.793 "uuid": "12c35a68-8408-4516-8a68-4ef74996e351", 00:10:23.793 "strip_size_kb": 64, 00:10:23.793 "state": "configuring", 00:10:23.793 "raid_level": "concat", 00:10:23.793 "superblock": true, 00:10:23.793 "num_base_bdevs": 4, 00:10:23.793 "num_base_bdevs_discovered": 1, 00:10:23.793 "num_base_bdevs_operational": 4, 00:10:23.793 "base_bdevs_list": [ 00:10:23.793 { 00:10:23.793 "name": "BaseBdev1", 00:10:23.794 "uuid": "35b5bcf1-6652-40bf-a951-b2c099430084", 00:10:23.794 "is_configured": true, 00:10:23.794 "data_offset": 2048, 00:10:23.794 "data_size": 63488 00:10:23.794 }, 00:10:23.794 { 00:10:23.794 "name": "BaseBdev2", 00:10:23.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.794 "is_configured": false, 00:10:23.794 "data_offset": 0, 00:10:23.794 "data_size": 0 00:10:23.794 }, 00:10:23.794 { 00:10:23.794 "name": "BaseBdev3", 00:10:23.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.794 "is_configured": false, 00:10:23.794 "data_offset": 0, 00:10:23.794 "data_size": 0 00:10:23.794 }, 00:10:23.794 { 00:10:23.794 "name": "BaseBdev4", 00:10:23.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.794 "is_configured": false, 00:10:23.794 "data_offset": 0, 00:10:23.794 "data_size": 0 00:10:23.794 } 00:10:23.794 ] 00:10:23.794 }' 00:10:23.794 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.794 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.054 [2024-11-28 02:25:57.711254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:24.054 BaseBdev2 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.054 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.315 [ 00:10:24.315 { 00:10:24.315 "name": "BaseBdev2", 00:10:24.315 "aliases": [ 00:10:24.315 "82f0d421-d661-42f7-b314-953a0eea197c" 00:10:24.315 ], 00:10:24.315 "product_name": "Malloc disk", 00:10:24.315 "block_size": 512, 00:10:24.315 "num_blocks": 65536, 00:10:24.315 "uuid": "82f0d421-d661-42f7-b314-953a0eea197c", 
00:10:24.315 "assigned_rate_limits": { 00:10:24.315 "rw_ios_per_sec": 0, 00:10:24.315 "rw_mbytes_per_sec": 0, 00:10:24.315 "r_mbytes_per_sec": 0, 00:10:24.315 "w_mbytes_per_sec": 0 00:10:24.315 }, 00:10:24.315 "claimed": true, 00:10:24.315 "claim_type": "exclusive_write", 00:10:24.315 "zoned": false, 00:10:24.315 "supported_io_types": { 00:10:24.315 "read": true, 00:10:24.315 "write": true, 00:10:24.315 "unmap": true, 00:10:24.315 "flush": true, 00:10:24.315 "reset": true, 00:10:24.315 "nvme_admin": false, 00:10:24.315 "nvme_io": false, 00:10:24.315 "nvme_io_md": false, 00:10:24.315 "write_zeroes": true, 00:10:24.315 "zcopy": true, 00:10:24.315 "get_zone_info": false, 00:10:24.315 "zone_management": false, 00:10:24.315 "zone_append": false, 00:10:24.315 "compare": false, 00:10:24.315 "compare_and_write": false, 00:10:24.315 "abort": true, 00:10:24.315 "seek_hole": false, 00:10:24.315 "seek_data": false, 00:10:24.315 "copy": true, 00:10:24.315 "nvme_iov_md": false 00:10:24.315 }, 00:10:24.315 "memory_domains": [ 00:10:24.315 { 00:10:24.315 "dma_device_id": "system", 00:10:24.315 "dma_device_type": 1 00:10:24.315 }, 00:10:24.315 { 00:10:24.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.315 "dma_device_type": 2 00:10:24.315 } 00:10:24.315 ], 00:10:24.315 "driver_specific": {} 00:10:24.315 } 00:10:24.315 ] 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.315 "name": "Existed_Raid", 00:10:24.315 "uuid": "12c35a68-8408-4516-8a68-4ef74996e351", 00:10:24.315 "strip_size_kb": 64, 00:10:24.315 "state": "configuring", 00:10:24.315 "raid_level": "concat", 00:10:24.315 "superblock": true, 00:10:24.315 "num_base_bdevs": 4, 00:10:24.315 "num_base_bdevs_discovered": 2, 00:10:24.315 
"num_base_bdevs_operational": 4, 00:10:24.315 "base_bdevs_list": [ 00:10:24.315 { 00:10:24.315 "name": "BaseBdev1", 00:10:24.315 "uuid": "35b5bcf1-6652-40bf-a951-b2c099430084", 00:10:24.315 "is_configured": true, 00:10:24.315 "data_offset": 2048, 00:10:24.315 "data_size": 63488 00:10:24.315 }, 00:10:24.315 { 00:10:24.315 "name": "BaseBdev2", 00:10:24.315 "uuid": "82f0d421-d661-42f7-b314-953a0eea197c", 00:10:24.315 "is_configured": true, 00:10:24.315 "data_offset": 2048, 00:10:24.315 "data_size": 63488 00:10:24.315 }, 00:10:24.315 { 00:10:24.315 "name": "BaseBdev3", 00:10:24.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.315 "is_configured": false, 00:10:24.315 "data_offset": 0, 00:10:24.315 "data_size": 0 00:10:24.315 }, 00:10:24.315 { 00:10:24.315 "name": "BaseBdev4", 00:10:24.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.315 "is_configured": false, 00:10:24.315 "data_offset": 0, 00:10:24.315 "data_size": 0 00:10:24.315 } 00:10:24.315 ] 00:10:24.315 }' 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.315 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.575 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:24.575 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.575 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.575 [2024-11-28 02:25:58.252638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.835 BaseBdev3 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.835 [ 00:10:24.835 { 00:10:24.835 "name": "BaseBdev3", 00:10:24.835 "aliases": [ 00:10:24.835 "7dd63688-a3c7-4809-9651-dc4a9e5ea4d3" 00:10:24.835 ], 00:10:24.835 "product_name": "Malloc disk", 00:10:24.835 "block_size": 512, 00:10:24.835 "num_blocks": 65536, 00:10:24.835 "uuid": "7dd63688-a3c7-4809-9651-dc4a9e5ea4d3", 00:10:24.835 "assigned_rate_limits": { 00:10:24.835 "rw_ios_per_sec": 0, 00:10:24.835 "rw_mbytes_per_sec": 0, 00:10:24.835 "r_mbytes_per_sec": 0, 00:10:24.835 "w_mbytes_per_sec": 0 00:10:24.835 }, 00:10:24.835 "claimed": true, 00:10:24.835 "claim_type": "exclusive_write", 00:10:24.835 "zoned": false, 00:10:24.835 "supported_io_types": { 
00:10:24.835 "read": true, 00:10:24.835 "write": true, 00:10:24.835 "unmap": true, 00:10:24.835 "flush": true, 00:10:24.835 "reset": true, 00:10:24.835 "nvme_admin": false, 00:10:24.835 "nvme_io": false, 00:10:24.835 "nvme_io_md": false, 00:10:24.835 "write_zeroes": true, 00:10:24.835 "zcopy": true, 00:10:24.835 "get_zone_info": false, 00:10:24.835 "zone_management": false, 00:10:24.835 "zone_append": false, 00:10:24.835 "compare": false, 00:10:24.835 "compare_and_write": false, 00:10:24.835 "abort": true, 00:10:24.835 "seek_hole": false, 00:10:24.835 "seek_data": false, 00:10:24.835 "copy": true, 00:10:24.835 "nvme_iov_md": false 00:10:24.835 }, 00:10:24.835 "memory_domains": [ 00:10:24.835 { 00:10:24.835 "dma_device_id": "system", 00:10:24.835 "dma_device_type": 1 00:10:24.835 }, 00:10:24.835 { 00:10:24.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.835 "dma_device_type": 2 00:10:24.835 } 00:10:24.835 ], 00:10:24.835 "driver_specific": {} 00:10:24.835 } 00:10:24.835 ] 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.836 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.836 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.836 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.836 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.836 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.836 "name": "Existed_Raid", 00:10:24.836 "uuid": "12c35a68-8408-4516-8a68-4ef74996e351", 00:10:24.836 "strip_size_kb": 64, 00:10:24.836 "state": "configuring", 00:10:24.836 "raid_level": "concat", 00:10:24.836 "superblock": true, 00:10:24.836 "num_base_bdevs": 4, 00:10:24.836 "num_base_bdevs_discovered": 3, 00:10:24.836 "num_base_bdevs_operational": 4, 00:10:24.836 "base_bdevs_list": [ 00:10:24.836 { 00:10:24.836 "name": "BaseBdev1", 00:10:24.836 "uuid": "35b5bcf1-6652-40bf-a951-b2c099430084", 00:10:24.836 "is_configured": true, 00:10:24.836 "data_offset": 2048, 00:10:24.836 "data_size": 63488 00:10:24.836 }, 00:10:24.836 { 00:10:24.836 "name": "BaseBdev2", 00:10:24.836 
"uuid": "82f0d421-d661-42f7-b314-953a0eea197c", 00:10:24.836 "is_configured": true, 00:10:24.836 "data_offset": 2048, 00:10:24.836 "data_size": 63488 00:10:24.836 }, 00:10:24.836 { 00:10:24.836 "name": "BaseBdev3", 00:10:24.836 "uuid": "7dd63688-a3c7-4809-9651-dc4a9e5ea4d3", 00:10:24.836 "is_configured": true, 00:10:24.836 "data_offset": 2048, 00:10:24.836 "data_size": 63488 00:10:24.836 }, 00:10:24.836 { 00:10:24.836 "name": "BaseBdev4", 00:10:24.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.836 "is_configured": false, 00:10:24.836 "data_offset": 0, 00:10:24.836 "data_size": 0 00:10:24.836 } 00:10:24.836 ] 00:10:24.836 }' 00:10:24.836 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.836 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.096 [2024-11-28 02:25:58.754575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:25.096 [2024-11-28 02:25:58.754975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:25.096 [2024-11-28 02:25:58.755053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:25.096 [2024-11-28 02:25:58.755362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:25.096 [2024-11-28 02:25:58.755577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:25.096 [2024-11-28 02:25:58.755631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:10:25.096 BaseBdev4 00:10:25.096 [2024-11-28 02:25:58.755837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.096 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.356 [ 00:10:25.356 { 00:10:25.356 "name": "BaseBdev4", 00:10:25.356 "aliases": [ 00:10:25.356 "7612ee74-4a00-4e76-aa66-f6ca4e488e1b" 00:10:25.356 ], 00:10:25.356 "product_name": "Malloc disk", 00:10:25.356 "block_size": 512, 
00:10:25.356 "num_blocks": 65536, 00:10:25.356 "uuid": "7612ee74-4a00-4e76-aa66-f6ca4e488e1b", 00:10:25.356 "assigned_rate_limits": { 00:10:25.356 "rw_ios_per_sec": 0, 00:10:25.356 "rw_mbytes_per_sec": 0, 00:10:25.356 "r_mbytes_per_sec": 0, 00:10:25.356 "w_mbytes_per_sec": 0 00:10:25.356 }, 00:10:25.356 "claimed": true, 00:10:25.356 "claim_type": "exclusive_write", 00:10:25.356 "zoned": false, 00:10:25.356 "supported_io_types": { 00:10:25.356 "read": true, 00:10:25.356 "write": true, 00:10:25.356 "unmap": true, 00:10:25.356 "flush": true, 00:10:25.356 "reset": true, 00:10:25.356 "nvme_admin": false, 00:10:25.356 "nvme_io": false, 00:10:25.356 "nvme_io_md": false, 00:10:25.356 "write_zeroes": true, 00:10:25.356 "zcopy": true, 00:10:25.356 "get_zone_info": false, 00:10:25.356 "zone_management": false, 00:10:25.356 "zone_append": false, 00:10:25.356 "compare": false, 00:10:25.356 "compare_and_write": false, 00:10:25.356 "abort": true, 00:10:25.356 "seek_hole": false, 00:10:25.356 "seek_data": false, 00:10:25.356 "copy": true, 00:10:25.356 "nvme_iov_md": false 00:10:25.356 }, 00:10:25.356 "memory_domains": [ 00:10:25.356 { 00:10:25.356 "dma_device_id": "system", 00:10:25.356 "dma_device_type": 1 00:10:25.356 }, 00:10:25.356 { 00:10:25.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.356 "dma_device_type": 2 00:10:25.356 } 00:10:25.356 ], 00:10:25.356 "driver_specific": {} 00:10:25.356 } 00:10:25.356 ] 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.356 "name": "Existed_Raid", 00:10:25.356 "uuid": "12c35a68-8408-4516-8a68-4ef74996e351", 00:10:25.356 "strip_size_kb": 64, 00:10:25.356 "state": "online", 00:10:25.356 "raid_level": "concat", 00:10:25.356 "superblock": true, 00:10:25.356 "num_base_bdevs": 
4, 00:10:25.356 "num_base_bdevs_discovered": 4, 00:10:25.356 "num_base_bdevs_operational": 4, 00:10:25.356 "base_bdevs_list": [ 00:10:25.356 { 00:10:25.356 "name": "BaseBdev1", 00:10:25.356 "uuid": "35b5bcf1-6652-40bf-a951-b2c099430084", 00:10:25.356 "is_configured": true, 00:10:25.356 "data_offset": 2048, 00:10:25.356 "data_size": 63488 00:10:25.356 }, 00:10:25.356 { 00:10:25.356 "name": "BaseBdev2", 00:10:25.356 "uuid": "82f0d421-d661-42f7-b314-953a0eea197c", 00:10:25.356 "is_configured": true, 00:10:25.356 "data_offset": 2048, 00:10:25.356 "data_size": 63488 00:10:25.356 }, 00:10:25.356 { 00:10:25.356 "name": "BaseBdev3", 00:10:25.356 "uuid": "7dd63688-a3c7-4809-9651-dc4a9e5ea4d3", 00:10:25.356 "is_configured": true, 00:10:25.356 "data_offset": 2048, 00:10:25.356 "data_size": 63488 00:10:25.356 }, 00:10:25.356 { 00:10:25.356 "name": "BaseBdev4", 00:10:25.356 "uuid": "7612ee74-4a00-4e76-aa66-f6ca4e488e1b", 00:10:25.356 "is_configured": true, 00:10:25.356 "data_offset": 2048, 00:10:25.356 "data_size": 63488 00:10:25.356 } 00:10:25.356 ] 00:10:25.356 }' 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.356 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.617 
02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.617 [2024-11-28 02:25:59.214353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.617 "name": "Existed_Raid", 00:10:25.617 "aliases": [ 00:10:25.617 "12c35a68-8408-4516-8a68-4ef74996e351" 00:10:25.617 ], 00:10:25.617 "product_name": "Raid Volume", 00:10:25.617 "block_size": 512, 00:10:25.617 "num_blocks": 253952, 00:10:25.617 "uuid": "12c35a68-8408-4516-8a68-4ef74996e351", 00:10:25.617 "assigned_rate_limits": { 00:10:25.617 "rw_ios_per_sec": 0, 00:10:25.617 "rw_mbytes_per_sec": 0, 00:10:25.617 "r_mbytes_per_sec": 0, 00:10:25.617 "w_mbytes_per_sec": 0 00:10:25.617 }, 00:10:25.617 "claimed": false, 00:10:25.617 "zoned": false, 00:10:25.617 "supported_io_types": { 00:10:25.617 "read": true, 00:10:25.617 "write": true, 00:10:25.617 "unmap": true, 00:10:25.617 "flush": true, 00:10:25.617 "reset": true, 00:10:25.617 "nvme_admin": false, 00:10:25.617 "nvme_io": false, 00:10:25.617 "nvme_io_md": false, 00:10:25.617 "write_zeroes": true, 00:10:25.617 "zcopy": false, 00:10:25.617 "get_zone_info": false, 00:10:25.617 "zone_management": false, 00:10:25.617 "zone_append": false, 00:10:25.617 "compare": false, 00:10:25.617 "compare_and_write": false, 00:10:25.617 "abort": false, 00:10:25.617 "seek_hole": false, 00:10:25.617 "seek_data": false, 00:10:25.617 "copy": false, 00:10:25.617 
"nvme_iov_md": false 00:10:25.617 }, 00:10:25.617 "memory_domains": [ 00:10:25.617 { 00:10:25.617 "dma_device_id": "system", 00:10:25.617 "dma_device_type": 1 00:10:25.617 }, 00:10:25.617 { 00:10:25.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.617 "dma_device_type": 2 00:10:25.617 }, 00:10:25.617 { 00:10:25.617 "dma_device_id": "system", 00:10:25.617 "dma_device_type": 1 00:10:25.617 }, 00:10:25.617 { 00:10:25.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.617 "dma_device_type": 2 00:10:25.617 }, 00:10:25.617 { 00:10:25.617 "dma_device_id": "system", 00:10:25.617 "dma_device_type": 1 00:10:25.617 }, 00:10:25.617 { 00:10:25.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.617 "dma_device_type": 2 00:10:25.617 }, 00:10:25.617 { 00:10:25.617 "dma_device_id": "system", 00:10:25.617 "dma_device_type": 1 00:10:25.617 }, 00:10:25.617 { 00:10:25.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.617 "dma_device_type": 2 00:10:25.617 } 00:10:25.617 ], 00:10:25.617 "driver_specific": { 00:10:25.617 "raid": { 00:10:25.617 "uuid": "12c35a68-8408-4516-8a68-4ef74996e351", 00:10:25.617 "strip_size_kb": 64, 00:10:25.617 "state": "online", 00:10:25.617 "raid_level": "concat", 00:10:25.617 "superblock": true, 00:10:25.617 "num_base_bdevs": 4, 00:10:25.617 "num_base_bdevs_discovered": 4, 00:10:25.617 "num_base_bdevs_operational": 4, 00:10:25.617 "base_bdevs_list": [ 00:10:25.617 { 00:10:25.617 "name": "BaseBdev1", 00:10:25.617 "uuid": "35b5bcf1-6652-40bf-a951-b2c099430084", 00:10:25.617 "is_configured": true, 00:10:25.617 "data_offset": 2048, 00:10:25.617 "data_size": 63488 00:10:25.617 }, 00:10:25.617 { 00:10:25.617 "name": "BaseBdev2", 00:10:25.617 "uuid": "82f0d421-d661-42f7-b314-953a0eea197c", 00:10:25.617 "is_configured": true, 00:10:25.617 "data_offset": 2048, 00:10:25.617 "data_size": 63488 00:10:25.617 }, 00:10:25.617 { 00:10:25.617 "name": "BaseBdev3", 00:10:25.617 "uuid": "7dd63688-a3c7-4809-9651-dc4a9e5ea4d3", 00:10:25.617 "is_configured": true, 
00:10:25.617 "data_offset": 2048, 00:10:25.617 "data_size": 63488 00:10:25.617 }, 00:10:25.617 { 00:10:25.617 "name": "BaseBdev4", 00:10:25.617 "uuid": "7612ee74-4a00-4e76-aa66-f6ca4e488e1b", 00:10:25.617 "is_configured": true, 00:10:25.617 "data_offset": 2048, 00:10:25.617 "data_size": 63488 00:10:25.617 } 00:10:25.617 ] 00:10:25.617 } 00:10:25.617 } 00:10:25.617 }' 00:10:25.617 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:25.877 BaseBdev2 00:10:25.877 BaseBdev3 00:10:25.877 BaseBdev4' 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.877 02:25:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.877 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.878 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.878 [2024-11-28 02:25:59.525445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.878 [2024-11-28 02:25:59.525527] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.878 [2024-11-28 02:25:59.525603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.191 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.192 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:26.192 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.192 "name": "Existed_Raid", 00:10:26.192 "uuid": "12c35a68-8408-4516-8a68-4ef74996e351", 00:10:26.192 "strip_size_kb": 64, 00:10:26.192 "state": "offline", 00:10:26.192 "raid_level": "concat", 00:10:26.192 "superblock": true, 00:10:26.192 "num_base_bdevs": 4, 00:10:26.192 "num_base_bdevs_discovered": 3, 00:10:26.192 "num_base_bdevs_operational": 3, 00:10:26.192 "base_bdevs_list": [ 00:10:26.192 { 00:10:26.192 "name": null, 00:10:26.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.192 "is_configured": false, 00:10:26.192 "data_offset": 0, 00:10:26.192 "data_size": 63488 00:10:26.192 }, 00:10:26.192 { 00:10:26.192 "name": "BaseBdev2", 00:10:26.192 "uuid": "82f0d421-d661-42f7-b314-953a0eea197c", 00:10:26.192 "is_configured": true, 00:10:26.192 "data_offset": 2048, 00:10:26.192 "data_size": 63488 00:10:26.192 }, 00:10:26.192 { 00:10:26.192 "name": "BaseBdev3", 00:10:26.192 "uuid": "7dd63688-a3c7-4809-9651-dc4a9e5ea4d3", 00:10:26.192 "is_configured": true, 00:10:26.192 "data_offset": 2048, 00:10:26.192 "data_size": 63488 00:10:26.192 }, 00:10:26.192 { 00:10:26.192 "name": "BaseBdev4", 00:10:26.192 "uuid": "7612ee74-4a00-4e76-aa66-f6ca4e488e1b", 00:10:26.192 "is_configured": true, 00:10:26.192 "data_offset": 2048, 00:10:26.192 "data_size": 63488 00:10:26.192 } 00:10:26.192 ] 00:10:26.192 }' 00:10:26.192 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.192 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.451 
02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.451 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.451 [2024-11-28 02:26:00.050099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.711 [2024-11-28 02:26:00.204565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:26.711 02:26:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.711 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.711 [2024-11-28 02:26:00.356915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:26.711 [2024-11-28 02:26:00.357040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.971 BaseBdev2 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.971 [ 00:10:26.971 { 00:10:26.971 "name": "BaseBdev2", 00:10:26.971 "aliases": [ 00:10:26.971 
"30492edb-9e67-4eda-8e5f-a1fee2743035" 00:10:26.971 ], 00:10:26.971 "product_name": "Malloc disk", 00:10:26.971 "block_size": 512, 00:10:26.971 "num_blocks": 65536, 00:10:26.971 "uuid": "30492edb-9e67-4eda-8e5f-a1fee2743035", 00:10:26.971 "assigned_rate_limits": { 00:10:26.971 "rw_ios_per_sec": 0, 00:10:26.971 "rw_mbytes_per_sec": 0, 00:10:26.971 "r_mbytes_per_sec": 0, 00:10:26.971 "w_mbytes_per_sec": 0 00:10:26.971 }, 00:10:26.971 "claimed": false, 00:10:26.971 "zoned": false, 00:10:26.971 "supported_io_types": { 00:10:26.971 "read": true, 00:10:26.971 "write": true, 00:10:26.971 "unmap": true, 00:10:26.971 "flush": true, 00:10:26.971 "reset": true, 00:10:26.971 "nvme_admin": false, 00:10:26.971 "nvme_io": false, 00:10:26.971 "nvme_io_md": false, 00:10:26.971 "write_zeroes": true, 00:10:26.971 "zcopy": true, 00:10:26.971 "get_zone_info": false, 00:10:26.971 "zone_management": false, 00:10:26.971 "zone_append": false, 00:10:26.971 "compare": false, 00:10:26.971 "compare_and_write": false, 00:10:26.971 "abort": true, 00:10:26.971 "seek_hole": false, 00:10:26.971 "seek_data": false, 00:10:26.971 "copy": true, 00:10:26.971 "nvme_iov_md": false 00:10:26.971 }, 00:10:26.971 "memory_domains": [ 00:10:26.971 { 00:10:26.971 "dma_device_id": "system", 00:10:26.971 "dma_device_type": 1 00:10:26.971 }, 00:10:26.971 { 00:10:26.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.971 "dma_device_type": 2 00:10:26.971 } 00:10:26.971 ], 00:10:26.971 "driver_specific": {} 00:10:26.971 } 00:10:26.971 ] 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.971 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.972 02:26:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.972 BaseBdev3 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.972 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.232 [ 00:10:27.232 { 
00:10:27.232 "name": "BaseBdev3", 00:10:27.232 "aliases": [ 00:10:27.232 "faee3857-da58-4243-a961-bcdef2d71da4" 00:10:27.232 ], 00:10:27.232 "product_name": "Malloc disk", 00:10:27.232 "block_size": 512, 00:10:27.232 "num_blocks": 65536, 00:10:27.232 "uuid": "faee3857-da58-4243-a961-bcdef2d71da4", 00:10:27.232 "assigned_rate_limits": { 00:10:27.232 "rw_ios_per_sec": 0, 00:10:27.232 "rw_mbytes_per_sec": 0, 00:10:27.232 "r_mbytes_per_sec": 0, 00:10:27.232 "w_mbytes_per_sec": 0 00:10:27.232 }, 00:10:27.232 "claimed": false, 00:10:27.232 "zoned": false, 00:10:27.232 "supported_io_types": { 00:10:27.232 "read": true, 00:10:27.232 "write": true, 00:10:27.232 "unmap": true, 00:10:27.232 "flush": true, 00:10:27.232 "reset": true, 00:10:27.232 "nvme_admin": false, 00:10:27.232 "nvme_io": false, 00:10:27.232 "nvme_io_md": false, 00:10:27.232 "write_zeroes": true, 00:10:27.232 "zcopy": true, 00:10:27.232 "get_zone_info": false, 00:10:27.232 "zone_management": false, 00:10:27.232 "zone_append": false, 00:10:27.232 "compare": false, 00:10:27.232 "compare_and_write": false, 00:10:27.232 "abort": true, 00:10:27.232 "seek_hole": false, 00:10:27.232 "seek_data": false, 00:10:27.232 "copy": true, 00:10:27.232 "nvme_iov_md": false 00:10:27.232 }, 00:10:27.232 "memory_domains": [ 00:10:27.232 { 00:10:27.232 "dma_device_id": "system", 00:10:27.232 "dma_device_type": 1 00:10:27.232 }, 00:10:27.232 { 00:10:27.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.232 "dma_device_type": 2 00:10:27.233 } 00:10:27.233 ], 00:10:27.233 "driver_specific": {} 00:10:27.233 } 00:10:27.233 ] 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.233 BaseBdev4 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:27.233 [ 00:10:27.233 { 00:10:27.233 "name": "BaseBdev4", 00:10:27.233 "aliases": [ 00:10:27.233 "9bf3542c-5857-4c0a-bf77-d702a61e8130" 00:10:27.233 ], 00:10:27.233 "product_name": "Malloc disk", 00:10:27.233 "block_size": 512, 00:10:27.233 "num_blocks": 65536, 00:10:27.233 "uuid": "9bf3542c-5857-4c0a-bf77-d702a61e8130", 00:10:27.233 "assigned_rate_limits": { 00:10:27.233 "rw_ios_per_sec": 0, 00:10:27.233 "rw_mbytes_per_sec": 0, 00:10:27.233 "r_mbytes_per_sec": 0, 00:10:27.233 "w_mbytes_per_sec": 0 00:10:27.233 }, 00:10:27.233 "claimed": false, 00:10:27.233 "zoned": false, 00:10:27.233 "supported_io_types": { 00:10:27.233 "read": true, 00:10:27.233 "write": true, 00:10:27.233 "unmap": true, 00:10:27.233 "flush": true, 00:10:27.233 "reset": true, 00:10:27.233 "nvme_admin": false, 00:10:27.233 "nvme_io": false, 00:10:27.233 "nvme_io_md": false, 00:10:27.233 "write_zeroes": true, 00:10:27.233 "zcopy": true, 00:10:27.233 "get_zone_info": false, 00:10:27.233 "zone_management": false, 00:10:27.233 "zone_append": false, 00:10:27.233 "compare": false, 00:10:27.233 "compare_and_write": false, 00:10:27.233 "abort": true, 00:10:27.233 "seek_hole": false, 00:10:27.233 "seek_data": false, 00:10:27.233 "copy": true, 00:10:27.233 "nvme_iov_md": false 00:10:27.233 }, 00:10:27.233 "memory_domains": [ 00:10:27.233 { 00:10:27.233 "dma_device_id": "system", 00:10:27.233 "dma_device_type": 1 00:10:27.233 }, 00:10:27.233 { 00:10:27.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.233 "dma_device_type": 2 00:10:27.233 } 00:10:27.233 ], 00:10:27.233 "driver_specific": {} 00:10:27.233 } 00:10:27.233 ] 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:27.233 02:26:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.233 [2024-11-28 02:26:00.753960] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.233 [2024-11-28 02:26:00.754054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.233 [2024-11-28 02:26:00.754101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.233 [2024-11-28 02:26:00.755888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.233 [2024-11-28 02:26:00.756010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.233 "name": "Existed_Raid", 00:10:27.233 "uuid": "ab69a050-f145-4b68-b5e9-6298f1151426", 00:10:27.233 "strip_size_kb": 64, 00:10:27.233 "state": "configuring", 00:10:27.233 "raid_level": "concat", 00:10:27.233 "superblock": true, 00:10:27.233 "num_base_bdevs": 4, 00:10:27.233 "num_base_bdevs_discovered": 3, 00:10:27.233 "num_base_bdevs_operational": 4, 00:10:27.233 "base_bdevs_list": [ 00:10:27.233 { 00:10:27.233 "name": "BaseBdev1", 00:10:27.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.233 "is_configured": false, 00:10:27.233 "data_offset": 0, 00:10:27.233 "data_size": 0 00:10:27.233 }, 00:10:27.233 { 00:10:27.233 "name": "BaseBdev2", 00:10:27.233 "uuid": "30492edb-9e67-4eda-8e5f-a1fee2743035", 00:10:27.233 "is_configured": true, 00:10:27.233 "data_offset": 2048, 00:10:27.233 "data_size": 63488 
00:10:27.233 }, 00:10:27.233 { 00:10:27.233 "name": "BaseBdev3", 00:10:27.233 "uuid": "faee3857-da58-4243-a961-bcdef2d71da4", 00:10:27.233 "is_configured": true, 00:10:27.233 "data_offset": 2048, 00:10:27.233 "data_size": 63488 00:10:27.233 }, 00:10:27.233 { 00:10:27.233 "name": "BaseBdev4", 00:10:27.233 "uuid": "9bf3542c-5857-4c0a-bf77-d702a61e8130", 00:10:27.233 "is_configured": true, 00:10:27.233 "data_offset": 2048, 00:10:27.233 "data_size": 63488 00:10:27.233 } 00:10:27.233 ] 00:10:27.233 }' 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.233 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.509 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:27.509 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.509 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.772 [2024-11-28 02:26:01.189161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.772 "name": "Existed_Raid", 00:10:27.772 "uuid": "ab69a050-f145-4b68-b5e9-6298f1151426", 00:10:27.772 "strip_size_kb": 64, 00:10:27.772 "state": "configuring", 00:10:27.772 "raid_level": "concat", 00:10:27.772 "superblock": true, 00:10:27.772 "num_base_bdevs": 4, 00:10:27.772 "num_base_bdevs_discovered": 2, 00:10:27.772 "num_base_bdevs_operational": 4, 00:10:27.772 "base_bdevs_list": [ 00:10:27.772 { 00:10:27.772 "name": "BaseBdev1", 00:10:27.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.772 "is_configured": false, 00:10:27.772 "data_offset": 0, 00:10:27.772 "data_size": 0 00:10:27.772 }, 00:10:27.772 { 00:10:27.772 "name": null, 00:10:27.772 "uuid": "30492edb-9e67-4eda-8e5f-a1fee2743035", 00:10:27.772 "is_configured": false, 00:10:27.772 "data_offset": 0, 00:10:27.772 "data_size": 63488 
00:10:27.772 }, 00:10:27.772 { 00:10:27.772 "name": "BaseBdev3", 00:10:27.772 "uuid": "faee3857-da58-4243-a961-bcdef2d71da4", 00:10:27.772 "is_configured": true, 00:10:27.772 "data_offset": 2048, 00:10:27.772 "data_size": 63488 00:10:27.772 }, 00:10:27.772 { 00:10:27.772 "name": "BaseBdev4", 00:10:27.772 "uuid": "9bf3542c-5857-4c0a-bf77-d702a61e8130", 00:10:27.772 "is_configured": true, 00:10:27.772 "data_offset": 2048, 00:10:27.772 "data_size": 63488 00:10:27.772 } 00:10:27.772 ] 00:10:27.772 }' 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.772 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.030 [2024-11-28 02:26:01.701467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.030 BaseBdev1 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.030 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.288 [ 00:10:28.288 { 00:10:28.288 "name": "BaseBdev1", 00:10:28.288 "aliases": [ 00:10:28.288 "cc2ac67a-efec-4dfc-8bb3-53322ec1274a" 00:10:28.288 ], 00:10:28.288 "product_name": "Malloc disk", 00:10:28.288 "block_size": 512, 00:10:28.288 "num_blocks": 65536, 00:10:28.288 "uuid": "cc2ac67a-efec-4dfc-8bb3-53322ec1274a", 00:10:28.288 "assigned_rate_limits": { 00:10:28.288 "rw_ios_per_sec": 0, 00:10:28.288 "rw_mbytes_per_sec": 0, 
00:10:28.288 "r_mbytes_per_sec": 0, 00:10:28.288 "w_mbytes_per_sec": 0 00:10:28.288 }, 00:10:28.288 "claimed": true, 00:10:28.288 "claim_type": "exclusive_write", 00:10:28.288 "zoned": false, 00:10:28.288 "supported_io_types": { 00:10:28.288 "read": true, 00:10:28.288 "write": true, 00:10:28.288 "unmap": true, 00:10:28.288 "flush": true, 00:10:28.288 "reset": true, 00:10:28.288 "nvme_admin": false, 00:10:28.288 "nvme_io": false, 00:10:28.288 "nvme_io_md": false, 00:10:28.288 "write_zeroes": true, 00:10:28.288 "zcopy": true, 00:10:28.288 "get_zone_info": false, 00:10:28.288 "zone_management": false, 00:10:28.288 "zone_append": false, 00:10:28.288 "compare": false, 00:10:28.288 "compare_and_write": false, 00:10:28.288 "abort": true, 00:10:28.288 "seek_hole": false, 00:10:28.288 "seek_data": false, 00:10:28.288 "copy": true, 00:10:28.288 "nvme_iov_md": false 00:10:28.288 }, 00:10:28.288 "memory_domains": [ 00:10:28.288 { 00:10:28.288 "dma_device_id": "system", 00:10:28.288 "dma_device_type": 1 00:10:28.288 }, 00:10:28.288 { 00:10:28.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.288 "dma_device_type": 2 00:10:28.288 } 00:10:28.288 ], 00:10:28.288 "driver_specific": {} 00:10:28.288 } 00:10:28.288 ] 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.288 02:26:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.288 "name": "Existed_Raid", 00:10:28.288 "uuid": "ab69a050-f145-4b68-b5e9-6298f1151426", 00:10:28.288 "strip_size_kb": 64, 00:10:28.288 "state": "configuring", 00:10:28.288 "raid_level": "concat", 00:10:28.288 "superblock": true, 00:10:28.288 "num_base_bdevs": 4, 00:10:28.288 "num_base_bdevs_discovered": 3, 00:10:28.288 "num_base_bdevs_operational": 4, 00:10:28.288 "base_bdevs_list": [ 00:10:28.288 { 00:10:28.288 "name": "BaseBdev1", 00:10:28.288 "uuid": "cc2ac67a-efec-4dfc-8bb3-53322ec1274a", 00:10:28.288 "is_configured": true, 00:10:28.288 "data_offset": 2048, 00:10:28.288 "data_size": 63488 00:10:28.288 }, 00:10:28.288 { 
00:10:28.288 "name": null, 00:10:28.288 "uuid": "30492edb-9e67-4eda-8e5f-a1fee2743035", 00:10:28.288 "is_configured": false, 00:10:28.288 "data_offset": 0, 00:10:28.288 "data_size": 63488 00:10:28.288 }, 00:10:28.288 { 00:10:28.288 "name": "BaseBdev3", 00:10:28.288 "uuid": "faee3857-da58-4243-a961-bcdef2d71da4", 00:10:28.288 "is_configured": true, 00:10:28.288 "data_offset": 2048, 00:10:28.288 "data_size": 63488 00:10:28.288 }, 00:10:28.288 { 00:10:28.288 "name": "BaseBdev4", 00:10:28.288 "uuid": "9bf3542c-5857-4c0a-bf77-d702a61e8130", 00:10:28.288 "is_configured": true, 00:10:28.288 "data_offset": 2048, 00:10:28.288 "data_size": 63488 00:10:28.288 } 00:10:28.288 ] 00:10:28.288 }' 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.288 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.547 [2024-11-28 02:26:02.168800] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.547 02:26:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.547 "name": "Existed_Raid", 00:10:28.547 "uuid": "ab69a050-f145-4b68-b5e9-6298f1151426", 00:10:28.547 "strip_size_kb": 64, 00:10:28.547 "state": "configuring", 00:10:28.547 "raid_level": "concat", 00:10:28.547 "superblock": true, 00:10:28.547 "num_base_bdevs": 4, 00:10:28.547 "num_base_bdevs_discovered": 2, 00:10:28.547 "num_base_bdevs_operational": 4, 00:10:28.547 "base_bdevs_list": [ 00:10:28.547 { 00:10:28.547 "name": "BaseBdev1", 00:10:28.547 "uuid": "cc2ac67a-efec-4dfc-8bb3-53322ec1274a", 00:10:28.547 "is_configured": true, 00:10:28.547 "data_offset": 2048, 00:10:28.547 "data_size": 63488 00:10:28.547 }, 00:10:28.547 { 00:10:28.547 "name": null, 00:10:28.547 "uuid": "30492edb-9e67-4eda-8e5f-a1fee2743035", 00:10:28.547 "is_configured": false, 00:10:28.547 "data_offset": 0, 00:10:28.547 "data_size": 63488 00:10:28.547 }, 00:10:28.547 { 00:10:28.547 "name": null, 00:10:28.547 "uuid": "faee3857-da58-4243-a961-bcdef2d71da4", 00:10:28.547 "is_configured": false, 00:10:28.547 "data_offset": 0, 00:10:28.547 "data_size": 63488 00:10:28.547 }, 00:10:28.547 { 00:10:28.547 "name": "BaseBdev4", 00:10:28.547 "uuid": "9bf3542c-5857-4c0a-bf77-d702a61e8130", 00:10:28.547 "is_configured": true, 00:10:28.547 "data_offset": 2048, 00:10:28.547 "data_size": 63488 00:10:28.547 } 00:10:28.547 ] 00:10:28.547 }' 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.547 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.116 02:26:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.116 [2024-11-28 02:26:02.663976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.116 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.116 "name": "Existed_Raid", 00:10:29.116 "uuid": "ab69a050-f145-4b68-b5e9-6298f1151426", 00:10:29.116 "strip_size_kb": 64, 00:10:29.117 "state": "configuring", 00:10:29.117 "raid_level": "concat", 00:10:29.117 "superblock": true, 00:10:29.117 "num_base_bdevs": 4, 00:10:29.117 "num_base_bdevs_discovered": 3, 00:10:29.117 "num_base_bdevs_operational": 4, 00:10:29.117 "base_bdevs_list": [ 00:10:29.117 { 00:10:29.117 "name": "BaseBdev1", 00:10:29.117 "uuid": "cc2ac67a-efec-4dfc-8bb3-53322ec1274a", 00:10:29.117 "is_configured": true, 00:10:29.117 "data_offset": 2048, 00:10:29.117 "data_size": 63488 00:10:29.117 }, 00:10:29.117 { 00:10:29.117 "name": null, 00:10:29.117 "uuid": "30492edb-9e67-4eda-8e5f-a1fee2743035", 00:10:29.117 "is_configured": false, 00:10:29.117 "data_offset": 0, 00:10:29.117 "data_size": 63488 00:10:29.117 }, 00:10:29.117 { 00:10:29.117 "name": "BaseBdev3", 00:10:29.117 "uuid": "faee3857-da58-4243-a961-bcdef2d71da4", 00:10:29.117 "is_configured": true, 00:10:29.117 "data_offset": 2048, 00:10:29.117 "data_size": 63488 00:10:29.117 }, 00:10:29.117 { 00:10:29.117 "name": "BaseBdev4", 00:10:29.117 "uuid": 
"9bf3542c-5857-4c0a-bf77-d702a61e8130", 00:10:29.117 "is_configured": true, 00:10:29.117 "data_offset": 2048, 00:10:29.117 "data_size": 63488 00:10:29.117 } 00:10:29.117 ] 00:10:29.117 }' 00:10:29.117 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.117 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.686 [2024-11-28 02:26:03.151209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.686 "name": "Existed_Raid", 00:10:29.686 "uuid": "ab69a050-f145-4b68-b5e9-6298f1151426", 00:10:29.686 "strip_size_kb": 64, 00:10:29.686 "state": "configuring", 00:10:29.686 "raid_level": "concat", 00:10:29.686 "superblock": true, 00:10:29.686 "num_base_bdevs": 4, 00:10:29.686 "num_base_bdevs_discovered": 2, 00:10:29.686 "num_base_bdevs_operational": 4, 00:10:29.686 "base_bdevs_list": [ 00:10:29.686 { 00:10:29.686 "name": null, 00:10:29.686 
"uuid": "cc2ac67a-efec-4dfc-8bb3-53322ec1274a", 00:10:29.686 "is_configured": false, 00:10:29.686 "data_offset": 0, 00:10:29.686 "data_size": 63488 00:10:29.686 }, 00:10:29.686 { 00:10:29.686 "name": null, 00:10:29.686 "uuid": "30492edb-9e67-4eda-8e5f-a1fee2743035", 00:10:29.686 "is_configured": false, 00:10:29.686 "data_offset": 0, 00:10:29.686 "data_size": 63488 00:10:29.686 }, 00:10:29.686 { 00:10:29.686 "name": "BaseBdev3", 00:10:29.686 "uuid": "faee3857-da58-4243-a961-bcdef2d71da4", 00:10:29.686 "is_configured": true, 00:10:29.686 "data_offset": 2048, 00:10:29.686 "data_size": 63488 00:10:29.686 }, 00:10:29.686 { 00:10:29.686 "name": "BaseBdev4", 00:10:29.686 "uuid": "9bf3542c-5857-4c0a-bf77-d702a61e8130", 00:10:29.686 "is_configured": true, 00:10:29.686 "data_offset": 2048, 00:10:29.686 "data_size": 63488 00:10:29.686 } 00:10:29.686 ] 00:10:29.686 }' 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.686 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.255 [2024-11-28 02:26:03.745137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.255 02:26:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.255 "name": "Existed_Raid", 00:10:30.255 "uuid": "ab69a050-f145-4b68-b5e9-6298f1151426", 00:10:30.255 "strip_size_kb": 64, 00:10:30.255 "state": "configuring", 00:10:30.255 "raid_level": "concat", 00:10:30.255 "superblock": true, 00:10:30.255 "num_base_bdevs": 4, 00:10:30.255 "num_base_bdevs_discovered": 3, 00:10:30.255 "num_base_bdevs_operational": 4, 00:10:30.255 "base_bdevs_list": [ 00:10:30.255 { 00:10:30.255 "name": null, 00:10:30.255 "uuid": "cc2ac67a-efec-4dfc-8bb3-53322ec1274a", 00:10:30.255 "is_configured": false, 00:10:30.255 "data_offset": 0, 00:10:30.255 "data_size": 63488 00:10:30.255 }, 00:10:30.255 { 00:10:30.255 "name": "BaseBdev2", 00:10:30.255 "uuid": "30492edb-9e67-4eda-8e5f-a1fee2743035", 00:10:30.255 "is_configured": true, 00:10:30.255 "data_offset": 2048, 00:10:30.255 "data_size": 63488 00:10:30.255 }, 00:10:30.255 { 00:10:30.255 "name": "BaseBdev3", 00:10:30.255 "uuid": "faee3857-da58-4243-a961-bcdef2d71da4", 00:10:30.255 "is_configured": true, 00:10:30.255 "data_offset": 2048, 00:10:30.255 "data_size": 63488 00:10:30.255 }, 00:10:30.255 { 00:10:30.255 "name": "BaseBdev4", 00:10:30.255 "uuid": "9bf3542c-5857-4c0a-bf77-d702a61e8130", 00:10:30.255 "is_configured": true, 00:10:30.255 "data_offset": 2048, 00:10:30.255 "data_size": 63488 00:10:30.255 } 00:10:30.255 ] 00:10:30.255 }' 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.255 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.514 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.514 02:26:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.514 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.514 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cc2ac67a-efec-4dfc-8bb3-53322ec1274a 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.773 [2024-11-28 02:26:04.313138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:30.773 [2024-11-28 02:26:04.313389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:30.773 [2024-11-28 02:26:04.313403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:30.773 [2024-11-28 02:26:04.313685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:30.773 NewBaseBdev 00:10:30.773 [2024-11-28 02:26:04.313837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:30.773 [2024-11-28 02:26:04.313860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:30.773 [2024-11-28 02:26:04.314010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.773 02:26:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.773 [ 00:10:30.773 { 00:10:30.773 "name": "NewBaseBdev", 00:10:30.773 "aliases": [ 00:10:30.773 "cc2ac67a-efec-4dfc-8bb3-53322ec1274a" 00:10:30.773 ], 00:10:30.773 "product_name": "Malloc disk", 00:10:30.773 "block_size": 512, 00:10:30.773 "num_blocks": 65536, 00:10:30.773 "uuid": "cc2ac67a-efec-4dfc-8bb3-53322ec1274a", 00:10:30.773 "assigned_rate_limits": { 00:10:30.773 "rw_ios_per_sec": 0, 00:10:30.773 "rw_mbytes_per_sec": 0, 00:10:30.773 "r_mbytes_per_sec": 0, 00:10:30.773 "w_mbytes_per_sec": 0 00:10:30.773 }, 00:10:30.773 "claimed": true, 00:10:30.773 "claim_type": "exclusive_write", 00:10:30.773 "zoned": false, 00:10:30.773 "supported_io_types": { 00:10:30.773 "read": true, 00:10:30.773 "write": true, 00:10:30.773 "unmap": true, 00:10:30.773 "flush": true, 00:10:30.773 "reset": true, 00:10:30.773 "nvme_admin": false, 00:10:30.773 "nvme_io": false, 00:10:30.773 "nvme_io_md": false, 00:10:30.773 "write_zeroes": true, 00:10:30.773 "zcopy": true, 00:10:30.773 "get_zone_info": false, 00:10:30.773 "zone_management": false, 00:10:30.773 "zone_append": false, 00:10:30.773 "compare": false, 00:10:30.773 "compare_and_write": false, 00:10:30.773 "abort": true, 00:10:30.773 "seek_hole": false, 00:10:30.773 "seek_data": false, 00:10:30.773 "copy": true, 00:10:30.773 "nvme_iov_md": false 00:10:30.773 }, 00:10:30.773 "memory_domains": [ 00:10:30.773 { 00:10:30.773 "dma_device_id": "system", 00:10:30.773 "dma_device_type": 1 00:10:30.773 }, 00:10:30.773 { 00:10:30.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.773 "dma_device_type": 2 00:10:30.773 } 00:10:30.773 ], 00:10:30.773 "driver_specific": {} 00:10:30.773 } 00:10:30.773 ] 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.773 02:26:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.773 "name": "Existed_Raid", 00:10:30.773 "uuid": "ab69a050-f145-4b68-b5e9-6298f1151426", 00:10:30.773 "strip_size_kb": 64, 00:10:30.773 
"state": "online", 00:10:30.773 "raid_level": "concat", 00:10:30.773 "superblock": true, 00:10:30.773 "num_base_bdevs": 4, 00:10:30.773 "num_base_bdevs_discovered": 4, 00:10:30.773 "num_base_bdevs_operational": 4, 00:10:30.773 "base_bdevs_list": [ 00:10:30.773 { 00:10:30.773 "name": "NewBaseBdev", 00:10:30.773 "uuid": "cc2ac67a-efec-4dfc-8bb3-53322ec1274a", 00:10:30.773 "is_configured": true, 00:10:30.773 "data_offset": 2048, 00:10:30.773 "data_size": 63488 00:10:30.773 }, 00:10:30.773 { 00:10:30.773 "name": "BaseBdev2", 00:10:30.773 "uuid": "30492edb-9e67-4eda-8e5f-a1fee2743035", 00:10:30.773 "is_configured": true, 00:10:30.773 "data_offset": 2048, 00:10:30.773 "data_size": 63488 00:10:30.773 }, 00:10:30.773 { 00:10:30.773 "name": "BaseBdev3", 00:10:30.773 "uuid": "faee3857-da58-4243-a961-bcdef2d71da4", 00:10:30.773 "is_configured": true, 00:10:30.773 "data_offset": 2048, 00:10:30.773 "data_size": 63488 00:10:30.773 }, 00:10:30.773 { 00:10:30.773 "name": "BaseBdev4", 00:10:30.773 "uuid": "9bf3542c-5857-4c0a-bf77-d702a61e8130", 00:10:30.773 "is_configured": true, 00:10:30.773 "data_offset": 2048, 00:10:30.773 "data_size": 63488 00:10:30.773 } 00:10:30.773 ] 00:10:30.773 }' 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.773 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:31.340 
02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:31.340 [2024-11-28 02:26:04.812776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.340 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:31.340 "name": "Existed_Raid", 00:10:31.340 "aliases": [ 00:10:31.340 "ab69a050-f145-4b68-b5e9-6298f1151426" 00:10:31.340 ], 00:10:31.340 "product_name": "Raid Volume", 00:10:31.340 "block_size": 512, 00:10:31.340 "num_blocks": 253952, 00:10:31.340 "uuid": "ab69a050-f145-4b68-b5e9-6298f1151426", 00:10:31.340 "assigned_rate_limits": { 00:10:31.340 "rw_ios_per_sec": 0, 00:10:31.340 "rw_mbytes_per_sec": 0, 00:10:31.340 "r_mbytes_per_sec": 0, 00:10:31.340 "w_mbytes_per_sec": 0 00:10:31.340 }, 00:10:31.340 "claimed": false, 00:10:31.340 "zoned": false, 00:10:31.340 "supported_io_types": { 00:10:31.340 "read": true, 00:10:31.340 "write": true, 00:10:31.340 "unmap": true, 00:10:31.340 "flush": true, 00:10:31.340 "reset": true, 00:10:31.340 "nvme_admin": false, 00:10:31.340 "nvme_io": false, 00:10:31.340 "nvme_io_md": false, 00:10:31.340 "write_zeroes": true, 00:10:31.340 "zcopy": false, 00:10:31.340 "get_zone_info": false, 00:10:31.340 "zone_management": false, 00:10:31.340 "zone_append": false, 00:10:31.340 "compare": false, 00:10:31.340 "compare_and_write": false, 00:10:31.340 "abort": 
false, 00:10:31.340 "seek_hole": false, 00:10:31.340 "seek_data": false, 00:10:31.340 "copy": false, 00:10:31.340 "nvme_iov_md": false 00:10:31.340 }, 00:10:31.340 "memory_domains": [ 00:10:31.340 { 00:10:31.340 "dma_device_id": "system", 00:10:31.340 "dma_device_type": 1 00:10:31.340 }, 00:10:31.340 { 00:10:31.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.340 "dma_device_type": 2 00:10:31.340 }, 00:10:31.340 { 00:10:31.340 "dma_device_id": "system", 00:10:31.340 "dma_device_type": 1 00:10:31.340 }, 00:10:31.340 { 00:10:31.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.340 "dma_device_type": 2 00:10:31.340 }, 00:10:31.340 { 00:10:31.340 "dma_device_id": "system", 00:10:31.340 "dma_device_type": 1 00:10:31.340 }, 00:10:31.340 { 00:10:31.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.340 "dma_device_type": 2 00:10:31.340 }, 00:10:31.340 { 00:10:31.340 "dma_device_id": "system", 00:10:31.340 "dma_device_type": 1 00:10:31.340 }, 00:10:31.340 { 00:10:31.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.341 "dma_device_type": 2 00:10:31.341 } 00:10:31.341 ], 00:10:31.341 "driver_specific": { 00:10:31.341 "raid": { 00:10:31.341 "uuid": "ab69a050-f145-4b68-b5e9-6298f1151426", 00:10:31.341 "strip_size_kb": 64, 00:10:31.341 "state": "online", 00:10:31.341 "raid_level": "concat", 00:10:31.341 "superblock": true, 00:10:31.341 "num_base_bdevs": 4, 00:10:31.341 "num_base_bdevs_discovered": 4, 00:10:31.341 "num_base_bdevs_operational": 4, 00:10:31.341 "base_bdevs_list": [ 00:10:31.341 { 00:10:31.341 "name": "NewBaseBdev", 00:10:31.341 "uuid": "cc2ac67a-efec-4dfc-8bb3-53322ec1274a", 00:10:31.341 "is_configured": true, 00:10:31.341 "data_offset": 2048, 00:10:31.341 "data_size": 63488 00:10:31.341 }, 00:10:31.341 { 00:10:31.341 "name": "BaseBdev2", 00:10:31.341 "uuid": "30492edb-9e67-4eda-8e5f-a1fee2743035", 00:10:31.341 "is_configured": true, 00:10:31.341 "data_offset": 2048, 00:10:31.341 "data_size": 63488 00:10:31.341 }, 00:10:31.341 { 00:10:31.341 
"name": "BaseBdev3", 00:10:31.341 "uuid": "faee3857-da58-4243-a961-bcdef2d71da4", 00:10:31.341 "is_configured": true, 00:10:31.341 "data_offset": 2048, 00:10:31.341 "data_size": 63488 00:10:31.341 }, 00:10:31.341 { 00:10:31.341 "name": "BaseBdev4", 00:10:31.341 "uuid": "9bf3542c-5857-4c0a-bf77-d702a61e8130", 00:10:31.341 "is_configured": true, 00:10:31.341 "data_offset": 2048, 00:10:31.341 "data_size": 63488 00:10:31.341 } 00:10:31.341 ] 00:10:31.341 } 00:10:31.341 } 00:10:31.341 }' 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:31.341 BaseBdev2 00:10:31.341 BaseBdev3 00:10:31.341 BaseBdev4' 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.341 02:26:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.341 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.341 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.601 [2024-11-28 02:26:05.143858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.601 [2024-11-28 02:26:05.143967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.601 [2024-11-28 02:26:05.144092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.601 [2024-11-28 02:26:05.144198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.601 [2024-11-28 02:26:05.144246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71734 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71734 ']' 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71734 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71734 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.601 killing process with pid 71734 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71734' 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71734 00:10:31.601 [2024-11-28 02:26:05.190953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.601 02:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71734 00:10:32.168 [2024-11-28 02:26:05.578293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.105 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:33.105 00:10:33.105 real 0m11.342s 00:10:33.105 user 0m17.997s 00:10:33.105 sys 0m1.993s 00:10:33.105 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.105 
************************************ 00:10:33.105 END TEST raid_state_function_test_sb 00:10:33.105 ************************************ 00:10:33.105 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.105 02:26:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:33.105 02:26:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:33.105 02:26:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.105 02:26:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.105 ************************************ 00:10:33.105 START TEST raid_superblock_test 00:10:33.105 ************************************ 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72408 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72408 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72408 ']' 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.105 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.364 [2024-11-28 02:26:06.842557] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:33.364 [2024-11-28 02:26:06.842744] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72408 ] 00:10:33.364 [2024-11-28 02:26:07.016783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.624 [2024-11-28 02:26:07.131991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.884 [2024-11-28 02:26:07.323137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.884 [2024-11-28 02:26:07.323267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.143 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:34.144 
02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.144 malloc1 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.144 [2024-11-28 02:26:07.727347] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:34.144 [2024-11-28 02:26:07.727413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.144 [2024-11-28 02:26:07.727437] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:34.144 [2024-11-28 02:26:07.727449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.144 [2024-11-28 02:26:07.729526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.144 [2024-11-28 02:26:07.729569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:34.144 pt1 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.144 malloc2 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.144 [2024-11-28 02:26:07.780712] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:34.144 [2024-11-28 02:26:07.780822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.144 [2024-11-28 02:26:07.780872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:34.144 [2024-11-28 02:26:07.780915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.144 [2024-11-28 02:26:07.782982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.144 [2024-11-28 02:26:07.783062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:34.144 
pt2 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.144 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.404 malloc3 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.404 [2024-11-28 02:26:07.851766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:34.404 [2024-11-28 02:26:07.851870] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.404 [2024-11-28 02:26:07.851916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:34.404 [2024-11-28 02:26:07.851969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.404 [2024-11-28 02:26:07.854169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.404 [2024-11-28 02:26:07.854253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:34.404 pt3 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:34.404 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.405 malloc4 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.405 [2024-11-28 02:26:07.913291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:34.405 [2024-11-28 02:26:07.913379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.405 [2024-11-28 02:26:07.913405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:34.405 [2024-11-28 02:26:07.913417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.405 [2024-11-28 02:26:07.915565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.405 [2024-11-28 02:26:07.915708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:34.405 pt4 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.405 [2024-11-28 02:26:07.925318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:34.405 [2024-11-28 
02:26:07.927188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:34.405 [2024-11-28 02:26:07.927303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:34.405 [2024-11-28 02:26:07.927356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:34.405 [2024-11-28 02:26:07.927547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:34.405 [2024-11-28 02:26:07.927559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:34.405 [2024-11-28 02:26:07.927857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:34.405 [2024-11-28 02:26:07.928062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:34.405 [2024-11-28 02:26:07.928079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:34.405 [2024-11-28 02:26:07.928246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.405 "name": "raid_bdev1", 00:10:34.405 "uuid": "dacd6d50-3003-4750-b5b3-d12d18f15edf", 00:10:34.405 "strip_size_kb": 64, 00:10:34.405 "state": "online", 00:10:34.405 "raid_level": "concat", 00:10:34.405 "superblock": true, 00:10:34.405 "num_base_bdevs": 4, 00:10:34.405 "num_base_bdevs_discovered": 4, 00:10:34.405 "num_base_bdevs_operational": 4, 00:10:34.405 "base_bdevs_list": [ 00:10:34.405 { 00:10:34.405 "name": "pt1", 00:10:34.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.405 "is_configured": true, 00:10:34.405 "data_offset": 2048, 00:10:34.405 "data_size": 63488 00:10:34.405 }, 00:10:34.405 { 00:10:34.405 "name": "pt2", 00:10:34.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.405 "is_configured": true, 00:10:34.405 "data_offset": 2048, 00:10:34.405 "data_size": 63488 00:10:34.405 }, 00:10:34.405 { 00:10:34.405 "name": "pt3", 00:10:34.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.405 "is_configured": true, 00:10:34.405 "data_offset": 2048, 00:10:34.405 
"data_size": 63488 00:10:34.405 }, 00:10:34.405 { 00:10:34.405 "name": "pt4", 00:10:34.405 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:34.405 "is_configured": true, 00:10:34.405 "data_offset": 2048, 00:10:34.405 "data_size": 63488 00:10:34.405 } 00:10:34.405 ] 00:10:34.405 }' 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.405 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.975 [2024-11-28 02:26:08.380830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.975 "name": "raid_bdev1", 00:10:34.975 "aliases": [ 00:10:34.975 "dacd6d50-3003-4750-b5b3-d12d18f15edf" 
00:10:34.975 ], 00:10:34.975 "product_name": "Raid Volume", 00:10:34.975 "block_size": 512, 00:10:34.975 "num_blocks": 253952, 00:10:34.975 "uuid": "dacd6d50-3003-4750-b5b3-d12d18f15edf", 00:10:34.975 "assigned_rate_limits": { 00:10:34.975 "rw_ios_per_sec": 0, 00:10:34.975 "rw_mbytes_per_sec": 0, 00:10:34.975 "r_mbytes_per_sec": 0, 00:10:34.975 "w_mbytes_per_sec": 0 00:10:34.975 }, 00:10:34.975 "claimed": false, 00:10:34.975 "zoned": false, 00:10:34.975 "supported_io_types": { 00:10:34.975 "read": true, 00:10:34.975 "write": true, 00:10:34.975 "unmap": true, 00:10:34.975 "flush": true, 00:10:34.975 "reset": true, 00:10:34.975 "nvme_admin": false, 00:10:34.975 "nvme_io": false, 00:10:34.975 "nvme_io_md": false, 00:10:34.975 "write_zeroes": true, 00:10:34.975 "zcopy": false, 00:10:34.975 "get_zone_info": false, 00:10:34.975 "zone_management": false, 00:10:34.975 "zone_append": false, 00:10:34.975 "compare": false, 00:10:34.975 "compare_and_write": false, 00:10:34.975 "abort": false, 00:10:34.975 "seek_hole": false, 00:10:34.975 "seek_data": false, 00:10:34.975 "copy": false, 00:10:34.975 "nvme_iov_md": false 00:10:34.975 }, 00:10:34.975 "memory_domains": [ 00:10:34.975 { 00:10:34.975 "dma_device_id": "system", 00:10:34.975 "dma_device_type": 1 00:10:34.975 }, 00:10:34.975 { 00:10:34.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.975 "dma_device_type": 2 00:10:34.975 }, 00:10:34.975 { 00:10:34.975 "dma_device_id": "system", 00:10:34.975 "dma_device_type": 1 00:10:34.975 }, 00:10:34.975 { 00:10:34.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.975 "dma_device_type": 2 00:10:34.975 }, 00:10:34.975 { 00:10:34.975 "dma_device_id": "system", 00:10:34.975 "dma_device_type": 1 00:10:34.975 }, 00:10:34.975 { 00:10:34.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.975 "dma_device_type": 2 00:10:34.975 }, 00:10:34.975 { 00:10:34.975 "dma_device_id": "system", 00:10:34.975 "dma_device_type": 1 00:10:34.975 }, 00:10:34.975 { 00:10:34.975 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:34.975 "dma_device_type": 2 00:10:34.975 } 00:10:34.975 ], 00:10:34.975 "driver_specific": { 00:10:34.975 "raid": { 00:10:34.975 "uuid": "dacd6d50-3003-4750-b5b3-d12d18f15edf", 00:10:34.975 "strip_size_kb": 64, 00:10:34.975 "state": "online", 00:10:34.975 "raid_level": "concat", 00:10:34.975 "superblock": true, 00:10:34.975 "num_base_bdevs": 4, 00:10:34.975 "num_base_bdevs_discovered": 4, 00:10:34.975 "num_base_bdevs_operational": 4, 00:10:34.975 "base_bdevs_list": [ 00:10:34.975 { 00:10:34.975 "name": "pt1", 00:10:34.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.975 "is_configured": true, 00:10:34.975 "data_offset": 2048, 00:10:34.975 "data_size": 63488 00:10:34.975 }, 00:10:34.975 { 00:10:34.975 "name": "pt2", 00:10:34.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.975 "is_configured": true, 00:10:34.975 "data_offset": 2048, 00:10:34.975 "data_size": 63488 00:10:34.975 }, 00:10:34.975 { 00:10:34.975 "name": "pt3", 00:10:34.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.975 "is_configured": true, 00:10:34.975 "data_offset": 2048, 00:10:34.975 "data_size": 63488 00:10:34.975 }, 00:10:34.975 { 00:10:34.975 "name": "pt4", 00:10:34.975 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:34.975 "is_configured": true, 00:10:34.975 "data_offset": 2048, 00:10:34.975 "data_size": 63488 00:10:34.975 } 00:10:34.975 ] 00:10:34.975 } 00:10:34.975 } 00:10:34.975 }' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:34.975 pt2 00:10:34.975 pt3 00:10:34.975 pt4' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.975 02:26:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.975 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 [2024-11-28 02:26:08.688307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dacd6d50-3003-4750-b5b3-d12d18f15edf 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dacd6d50-3003-4750-b5b3-d12d18f15edf ']' 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.236 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 [2024-11-28 02:26:08.731942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.236 [2024-11-28 02:26:08.731978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.237 [2024-11-28 02:26:08.732081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.237 [2024-11-28 02:26:08.732155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.237 [2024-11-28 02:26:08.732170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:35.237 02:26:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.237 [2024-11-28 02:26:08.891665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:35.237 [2024-11-28 02:26:08.893499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:35.237 [2024-11-28 02:26:08.893548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:35.237 [2024-11-28 02:26:08.893584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:35.237 [2024-11-28 02:26:08.893639] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:35.237 [2024-11-28 02:26:08.893695] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:35.237 [2024-11-28 02:26:08.893716] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:35.237 [2024-11-28 02:26:08.893738] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:35.237 [2024-11-28 02:26:08.893754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.237 [2024-11-28 02:26:08.893766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:35.237 request: 00:10:35.237 { 00:10:35.237 "name": "raid_bdev1", 00:10:35.237 "raid_level": "concat", 00:10:35.237 "base_bdevs": [ 00:10:35.237 "malloc1", 00:10:35.237 "malloc2", 00:10:35.237 "malloc3", 00:10:35.237 "malloc4" 00:10:35.237 ], 00:10:35.237 "strip_size_kb": 64, 00:10:35.237 "superblock": false, 00:10:35.237 "method": "bdev_raid_create", 00:10:35.237 "req_id": 1 00:10:35.237 } 00:10:35.237 Got JSON-RPC error response 00:10:35.237 response: 00:10:35.237 { 00:10:35.237 "code": -17, 00:10:35.237 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:35.237 } 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.237 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.498 [2024-11-28 02:26:08.943528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:35.498 [2024-11-28 02:26:08.943645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.498 [2024-11-28 02:26:08.943688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:35.498 [2024-11-28 02:26:08.943726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.498 [2024-11-28 02:26:08.945974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.498 [2024-11-28 02:26:08.946060] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:35.498 [2024-11-28 02:26:08.946168] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:35.498 [2024-11-28 02:26:08.946246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:35.498 pt1 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.498 "name": "raid_bdev1", 00:10:35.498 "uuid": "dacd6d50-3003-4750-b5b3-d12d18f15edf", 00:10:35.498 "strip_size_kb": 64, 00:10:35.498 "state": "configuring", 00:10:35.498 "raid_level": "concat", 00:10:35.498 "superblock": true, 00:10:35.498 "num_base_bdevs": 4, 00:10:35.498 "num_base_bdevs_discovered": 1, 00:10:35.498 "num_base_bdevs_operational": 4, 00:10:35.498 "base_bdevs_list": [ 00:10:35.498 { 00:10:35.498 "name": "pt1", 00:10:35.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.498 "is_configured": true, 00:10:35.498 "data_offset": 2048, 00:10:35.498 "data_size": 63488 00:10:35.498 }, 00:10:35.498 { 00:10:35.498 "name": null, 00:10:35.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.498 "is_configured": false, 00:10:35.498 "data_offset": 2048, 00:10:35.498 "data_size": 63488 00:10:35.498 }, 00:10:35.498 { 00:10:35.498 "name": null, 00:10:35.498 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:35.498 "is_configured": false, 00:10:35.498 "data_offset": 2048, 00:10:35.498 "data_size": 63488 00:10:35.498 }, 00:10:35.498 { 00:10:35.498 "name": null, 00:10:35.498 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:35.498 "is_configured": false, 00:10:35.498 "data_offset": 2048, 00:10:35.498 "data_size": 63488 00:10:35.498 } 00:10:35.498 ] 00:10:35.498 }' 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.498 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.760 [2024-11-28 02:26:09.382874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:35.760 [2024-11-28 02:26:09.382975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.760 [2024-11-28 02:26:09.382999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:35.760 [2024-11-28 02:26:09.383013] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.760 [2024-11-28 02:26:09.383480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.760 [2024-11-28 02:26:09.383511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:35.760 [2024-11-28 02:26:09.383618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:35.760 [2024-11-28 02:26:09.383647] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:35.760 pt2 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.760 [2024-11-28 02:26:09.394835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.760 02:26:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.760 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.022 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.022 "name": "raid_bdev1", 00:10:36.022 "uuid": "dacd6d50-3003-4750-b5b3-d12d18f15edf", 00:10:36.022 "strip_size_kb": 64, 00:10:36.022 "state": "configuring", 00:10:36.022 "raid_level": "concat", 00:10:36.022 "superblock": true, 00:10:36.022 "num_base_bdevs": 4, 00:10:36.022 "num_base_bdevs_discovered": 1, 00:10:36.022 "num_base_bdevs_operational": 4, 00:10:36.022 "base_bdevs_list": [ 00:10:36.022 { 00:10:36.022 "name": "pt1", 00:10:36.022 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.022 "is_configured": true, 00:10:36.022 "data_offset": 2048, 00:10:36.022 "data_size": 63488 00:10:36.022 }, 00:10:36.022 { 00:10:36.022 "name": null, 00:10:36.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.022 "is_configured": false, 00:10:36.022 "data_offset": 0, 00:10:36.022 "data_size": 63488 00:10:36.022 }, 00:10:36.022 { 00:10:36.022 "name": null, 00:10:36.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.022 "is_configured": false, 00:10:36.022 "data_offset": 2048, 00:10:36.022 "data_size": 63488 00:10:36.022 }, 00:10:36.022 { 00:10:36.022 "name": null, 00:10:36.022 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:36.022 "is_configured": false, 00:10:36.022 "data_offset": 2048, 00:10:36.022 "data_size": 63488 00:10:36.022 } 00:10:36.022 ] 00:10:36.022 }' 00:10:36.022 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.022 02:26:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.283 [2024-11-28 02:26:09.878104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:36.283 [2024-11-28 02:26:09.878232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.283 [2024-11-28 02:26:09.878277] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:36.283 [2024-11-28 02:26:09.878317] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.283 [2024-11-28 02:26:09.878821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.283 [2024-11-28 02:26:09.878892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:36.283 [2024-11-28 02:26:09.879043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:36.283 [2024-11-28 02:26:09.879108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:36.283 pt2 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.283 [2024-11-28 02:26:09.890058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:36.283 [2024-11-28 02:26:09.890159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.283 [2024-11-28 02:26:09.890201] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:36.283 [2024-11-28 02:26:09.890234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.283 [2024-11-28 02:26:09.890686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.283 [2024-11-28 02:26:09.890767] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:36.283 [2024-11-28 02:26:09.890882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:36.283 [2024-11-28 02:26:09.890974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:36.283 pt3 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.283 [2024-11-28 02:26:09.902011] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:36.283 [2024-11-28 02:26:09.902104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.283 [2024-11-28 02:26:09.902142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:36.283 [2024-11-28 02:26:09.902179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.283 [2024-11-28 02:26:09.902583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.283 [2024-11-28 02:26:09.902606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:36.283 [2024-11-28 02:26:09.902674] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:36.283 [2024-11-28 02:26:09.902696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:36.283 [2024-11-28 02:26:09.902833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:36.283 [2024-11-28 02:26:09.902842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:36.283 [2024-11-28 02:26:09.903114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:36.283 [2024-11-28 02:26:09.903288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:36.283 [2024-11-28 02:26:09.903311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:36.283 [2024-11-28 02:26:09.903455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.283 pt4 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.283 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.283 "name": "raid_bdev1", 00:10:36.283 "uuid": "dacd6d50-3003-4750-b5b3-d12d18f15edf", 00:10:36.283 "strip_size_kb": 64, 00:10:36.283 "state": "online", 00:10:36.283 "raid_level": "concat", 00:10:36.283 
"superblock": true, 00:10:36.283 "num_base_bdevs": 4, 00:10:36.283 "num_base_bdevs_discovered": 4, 00:10:36.283 "num_base_bdevs_operational": 4, 00:10:36.283 "base_bdevs_list": [ 00:10:36.283 { 00:10:36.283 "name": "pt1", 00:10:36.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.283 "is_configured": true, 00:10:36.283 "data_offset": 2048, 00:10:36.283 "data_size": 63488 00:10:36.283 }, 00:10:36.283 { 00:10:36.283 "name": "pt2", 00:10:36.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.283 "is_configured": true, 00:10:36.283 "data_offset": 2048, 00:10:36.283 "data_size": 63488 00:10:36.283 }, 00:10:36.283 { 00:10:36.283 "name": "pt3", 00:10:36.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.284 "is_configured": true, 00:10:36.284 "data_offset": 2048, 00:10:36.284 "data_size": 63488 00:10:36.284 }, 00:10:36.284 { 00:10:36.284 "name": "pt4", 00:10:36.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:36.284 "is_configured": true, 00:10:36.284 "data_offset": 2048, 00:10:36.284 "data_size": 63488 00:10:36.284 } 00:10:36.284 ] 00:10:36.284 }' 00:10:36.284 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.284 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.854 02:26:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.854 [2024-11-28 02:26:10.353585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.854 "name": "raid_bdev1", 00:10:36.854 "aliases": [ 00:10:36.854 "dacd6d50-3003-4750-b5b3-d12d18f15edf" 00:10:36.854 ], 00:10:36.854 "product_name": "Raid Volume", 00:10:36.854 "block_size": 512, 00:10:36.854 "num_blocks": 253952, 00:10:36.854 "uuid": "dacd6d50-3003-4750-b5b3-d12d18f15edf", 00:10:36.854 "assigned_rate_limits": { 00:10:36.854 "rw_ios_per_sec": 0, 00:10:36.854 "rw_mbytes_per_sec": 0, 00:10:36.854 "r_mbytes_per_sec": 0, 00:10:36.854 "w_mbytes_per_sec": 0 00:10:36.854 }, 00:10:36.854 "claimed": false, 00:10:36.854 "zoned": false, 00:10:36.854 "supported_io_types": { 00:10:36.854 "read": true, 00:10:36.854 "write": true, 00:10:36.854 "unmap": true, 00:10:36.854 "flush": true, 00:10:36.854 "reset": true, 00:10:36.854 "nvme_admin": false, 00:10:36.854 "nvme_io": false, 00:10:36.854 "nvme_io_md": false, 00:10:36.854 "write_zeroes": true, 00:10:36.854 "zcopy": false, 00:10:36.854 "get_zone_info": false, 00:10:36.854 "zone_management": false, 00:10:36.854 "zone_append": false, 00:10:36.854 "compare": false, 00:10:36.854 "compare_and_write": false, 00:10:36.854 "abort": false, 00:10:36.854 "seek_hole": false, 00:10:36.854 "seek_data": false, 00:10:36.854 "copy": false, 00:10:36.854 "nvme_iov_md": false 00:10:36.854 }, 00:10:36.854 
"memory_domains": [ 00:10:36.854 { 00:10:36.854 "dma_device_id": "system", 00:10:36.854 "dma_device_type": 1 00:10:36.854 }, 00:10:36.854 { 00:10:36.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.854 "dma_device_type": 2 00:10:36.854 }, 00:10:36.854 { 00:10:36.854 "dma_device_id": "system", 00:10:36.854 "dma_device_type": 1 00:10:36.854 }, 00:10:36.854 { 00:10:36.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.854 "dma_device_type": 2 00:10:36.854 }, 00:10:36.854 { 00:10:36.854 "dma_device_id": "system", 00:10:36.854 "dma_device_type": 1 00:10:36.854 }, 00:10:36.854 { 00:10:36.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.854 "dma_device_type": 2 00:10:36.854 }, 00:10:36.854 { 00:10:36.854 "dma_device_id": "system", 00:10:36.854 "dma_device_type": 1 00:10:36.854 }, 00:10:36.854 { 00:10:36.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.854 "dma_device_type": 2 00:10:36.854 } 00:10:36.854 ], 00:10:36.854 "driver_specific": { 00:10:36.854 "raid": { 00:10:36.854 "uuid": "dacd6d50-3003-4750-b5b3-d12d18f15edf", 00:10:36.854 "strip_size_kb": 64, 00:10:36.854 "state": "online", 00:10:36.854 "raid_level": "concat", 00:10:36.854 "superblock": true, 00:10:36.854 "num_base_bdevs": 4, 00:10:36.854 "num_base_bdevs_discovered": 4, 00:10:36.854 "num_base_bdevs_operational": 4, 00:10:36.854 "base_bdevs_list": [ 00:10:36.854 { 00:10:36.854 "name": "pt1", 00:10:36.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.854 "is_configured": true, 00:10:36.854 "data_offset": 2048, 00:10:36.854 "data_size": 63488 00:10:36.854 }, 00:10:36.854 { 00:10:36.854 "name": "pt2", 00:10:36.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.854 "is_configured": true, 00:10:36.854 "data_offset": 2048, 00:10:36.854 "data_size": 63488 00:10:36.854 }, 00:10:36.854 { 00:10:36.854 "name": "pt3", 00:10:36.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.854 "is_configured": true, 00:10:36.854 "data_offset": 2048, 00:10:36.854 "data_size": 63488 
00:10:36.854 }, 00:10:36.854 { 00:10:36.854 "name": "pt4", 00:10:36.854 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:36.854 "is_configured": true, 00:10:36.854 "data_offset": 2048, 00:10:36.854 "data_size": 63488 00:10:36.854 } 00:10:36.854 ] 00:10:36.854 } 00:10:36.854 } 00:10:36.854 }' 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:36.854 pt2 00:10:36.854 pt3 00:10:36.854 pt4' 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.854 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.855 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.855 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:37.115 [2024-11-28 02:26:10.684943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dacd6d50-3003-4750-b5b3-d12d18f15edf '!=' dacd6d50-3003-4750-b5b3-d12d18f15edf ']' 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72408 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72408 ']' 00:10:37.115 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72408 00:10:37.116 02:26:10 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:10:37.116 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.116 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72408 00:10:37.116 killing process with pid 72408 00:10:37.116 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.116 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.116 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72408' 00:10:37.116 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72408 00:10:37.116 [2024-11-28 02:26:10.760422] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.116 [2024-11-28 02:26:10.760519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.116 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72408 00:10:37.116 [2024-11-28 02:26:10.760594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.116 [2024-11-28 02:26:10.760605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:37.685 [2024-11-28 02:26:11.140757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.625 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:38.625 ************************************ 00:10:38.625 END TEST raid_superblock_test 00:10:38.625 ************************************ 00:10:38.625 00:10:38.625 real 0m5.482s 00:10:38.625 user 0m7.855s 00:10:38.625 sys 0m0.957s 00:10:38.625 02:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.625 02:26:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.625 02:26:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:38.625 02:26:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.625 02:26:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.625 02:26:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.885 ************************************ 00:10:38.885 START TEST raid_read_error_test 00:10:38.885 ************************************ 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZagXnnfZWx 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72677 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72677 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72677 ']' 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.885 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.885 [2024-11-28 02:26:12.412568] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:38.885 [2024-11-28 02:26:12.412756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72677 ] 00:10:39.145 [2024-11-28 02:26:12.564068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.145 [2024-11-28 02:26:12.673431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.405 [2024-11-28 02:26:12.857490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.405 [2024-11-28 02:26:12.857660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.665 BaseBdev1_malloc 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.665 true 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.665 [2024-11-28 02:26:13.297290] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:39.665 [2024-11-28 02:26:13.297353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.665 [2024-11-28 02:26:13.297373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:39.665 [2024-11-28 02:26:13.297386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.665 [2024-11-28 02:26:13.299455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.665 [2024-11-28 02:26:13.299504] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:39.665 BaseBdev1 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.665 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 BaseBdev2_malloc 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 true 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 [2024-11-28 02:26:13.365549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:39.926 [2024-11-28 02:26:13.365610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.926 [2024-11-28 02:26:13.365628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:39.926 [2024-11-28 02:26:13.365641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.926 [2024-11-28 02:26:13.367708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.926 [2024-11-28 02:26:13.367754] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:39.926 BaseBdev2 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 BaseBdev3_malloc 00:10:39.926 02:26:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 true 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 [2024-11-28 02:26:13.444340] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:39.926 [2024-11-28 02:26:13.444399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.926 [2024-11-28 02:26:13.444418] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:39.926 [2024-11-28 02:26:13.444431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.926 [2024-11-28 02:26:13.446493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.926 [2024-11-28 02:26:13.446541] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:39.926 BaseBdev3 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 BaseBdev4_malloc 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 true 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 [2024-11-28 02:26:13.509876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:39.926 [2024-11-28 02:26:13.509952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.926 [2024-11-28 02:26:13.509974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:39.926 [2024-11-28 02:26:13.509987] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.926 [2024-11-28 02:26:13.512107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.926 [2024-11-28 02:26:13.512153] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:39.926 BaseBdev4 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 [2024-11-28 02:26:13.521933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.926 [2024-11-28 02:26:13.523699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.926 [2024-11-28 02:26:13.523780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.926 [2024-11-28 02:26:13.523846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.926 [2024-11-28 02:26:13.524095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:39.926 [2024-11-28 02:26:13.524115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:39.926 [2024-11-28 02:26:13.524372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:39.926 [2024-11-28 02:26:13.524558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:39.926 [2024-11-28 02:26:13.524570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:39.926 [2024-11-28 02:26:13.524739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:39.926 02:26:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.926 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.927 "name": "raid_bdev1", 00:10:39.927 "uuid": "82ad15ca-91fc-4919-aa66-de1a3a8dde18", 00:10:39.927 "strip_size_kb": 64, 00:10:39.927 "state": "online", 00:10:39.927 "raid_level": "concat", 00:10:39.927 "superblock": true, 00:10:39.927 "num_base_bdevs": 4, 00:10:39.927 "num_base_bdevs_discovered": 4, 00:10:39.927 "num_base_bdevs_operational": 4, 00:10:39.927 "base_bdevs_list": [ 
00:10:39.927 { 00:10:39.927 "name": "BaseBdev1", 00:10:39.927 "uuid": "3101b9b8-9df2-517c-a7dc-50e96233ba8f", 00:10:39.927 "is_configured": true, 00:10:39.927 "data_offset": 2048, 00:10:39.927 "data_size": 63488 00:10:39.927 }, 00:10:39.927 { 00:10:39.927 "name": "BaseBdev2", 00:10:39.927 "uuid": "0433ce21-5eb2-566c-8f6f-77f69f350cf2", 00:10:39.927 "is_configured": true, 00:10:39.927 "data_offset": 2048, 00:10:39.927 "data_size": 63488 00:10:39.927 }, 00:10:39.927 { 00:10:39.927 "name": "BaseBdev3", 00:10:39.927 "uuid": "d1b1d93e-000b-5225-ae38-e3b144326cb7", 00:10:39.927 "is_configured": true, 00:10:39.927 "data_offset": 2048, 00:10:39.927 "data_size": 63488 00:10:39.927 }, 00:10:39.927 { 00:10:39.927 "name": "BaseBdev4", 00:10:39.927 "uuid": "dab540dc-a2e8-556c-a96f-f08e1a6fa1dc", 00:10:39.927 "is_configured": true, 00:10:39.927 "data_offset": 2048, 00:10:39.927 "data_size": 63488 00:10:39.927 } 00:10:39.927 ] 00:10:39.927 }' 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.927 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.497 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:40.497 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:40.497 [2024-11-28 02:26:14.098383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:41.437 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:41.437 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.437 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.437 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.437 02:26:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:41.437 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:41.437 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:41.437 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:41.437 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.437 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.437 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.438 02:26:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.438 "name": "raid_bdev1", 00:10:41.438 "uuid": "82ad15ca-91fc-4919-aa66-de1a3a8dde18", 00:10:41.438 "strip_size_kb": 64, 00:10:41.438 "state": "online", 00:10:41.438 "raid_level": "concat", 00:10:41.438 "superblock": true, 00:10:41.438 "num_base_bdevs": 4, 00:10:41.438 "num_base_bdevs_discovered": 4, 00:10:41.438 "num_base_bdevs_operational": 4, 00:10:41.438 "base_bdevs_list": [ 00:10:41.438 { 00:10:41.438 "name": "BaseBdev1", 00:10:41.438 "uuid": "3101b9b8-9df2-517c-a7dc-50e96233ba8f", 00:10:41.438 "is_configured": true, 00:10:41.438 "data_offset": 2048, 00:10:41.438 "data_size": 63488 00:10:41.438 }, 00:10:41.438 { 00:10:41.438 "name": "BaseBdev2", 00:10:41.438 "uuid": "0433ce21-5eb2-566c-8f6f-77f69f350cf2", 00:10:41.438 "is_configured": true, 00:10:41.438 "data_offset": 2048, 00:10:41.438 "data_size": 63488 00:10:41.438 }, 00:10:41.438 { 00:10:41.438 "name": "BaseBdev3", 00:10:41.438 "uuid": "d1b1d93e-000b-5225-ae38-e3b144326cb7", 00:10:41.438 "is_configured": true, 00:10:41.438 "data_offset": 2048, 00:10:41.438 "data_size": 63488 00:10:41.438 }, 00:10:41.438 { 00:10:41.438 "name": "BaseBdev4", 00:10:41.438 "uuid": "dab540dc-a2e8-556c-a96f-f08e1a6fa1dc", 00:10:41.438 "is_configured": true, 00:10:41.438 "data_offset": 2048, 00:10:41.438 "data_size": 63488 00:10:41.438 } 00:10:41.438 ] 00:10:41.438 }' 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.438 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.006 [2024-11-28 02:26:15.434279] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.006 [2024-11-28 02:26:15.434395] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.006 [2024-11-28 02:26:15.437106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.006 [2024-11-28 02:26:15.437217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.006 [2024-11-28 02:26:15.437287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.006 [2024-11-28 02:26:15.437347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:42.006 { 00:10:42.006 "results": [ 00:10:42.006 { 00:10:42.006 "job": "raid_bdev1", 00:10:42.006 "core_mask": "0x1", 00:10:42.006 "workload": "randrw", 00:10:42.006 "percentage": 50, 00:10:42.006 "status": "finished", 00:10:42.006 "queue_depth": 1, 00:10:42.006 "io_size": 131072, 00:10:42.006 "runtime": 1.3368, 00:10:42.006 "iops": 15054.60801915021, 00:10:42.006 "mibps": 1881.8260023937762, 00:10:42.006 "io_failed": 1, 00:10:42.006 "io_timeout": 0, 00:10:42.006 "avg_latency_us": 91.9787119314259, 00:10:42.006 "min_latency_us": 26.382532751091702, 00:10:42.006 "max_latency_us": 1387.989519650655 00:10:42.006 } 00:10:42.006 ], 00:10:42.006 "core_count": 1 00:10:42.006 } 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72677 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72677 ']' 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72677 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72677 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72677' 00:10:42.006 killing process with pid 72677 00:10:42.006 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72677 00:10:42.006 [2024-11-28 02:26:15.470156] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.007 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72677 00:10:42.266 [2024-11-28 02:26:15.788434] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.649 02:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZagXnnfZWx 00:10:43.649 02:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:43.649 02:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:43.649 02:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:43.649 02:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:43.649 ************************************ 00:10:43.649 END TEST raid_read_error_test 00:10:43.649 ************************************ 00:10:43.649 02:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.649 02:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:43.649 02:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:43.649 00:10:43.649 real 0m4.651s 
00:10:43.649 user 0m5.445s 00:10:43.649 sys 0m0.609s 00:10:43.649 02:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.649 02:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.650 02:26:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:43.650 02:26:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:43.650 02:26:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.650 02:26:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.650 ************************************ 00:10:43.650 START TEST raid_write_error_test 00:10:43.650 ************************************ 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4NKheqehmZ 00:10:43.650 02:26:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72817 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72817 00:10:43.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72817 ']' 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.650 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.650 [2024-11-28 02:26:17.135046] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:43.650 [2024-11-28 02:26:17.135160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72817 ] 00:10:43.650 [2024-11-28 02:26:17.310136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.910 [2024-11-28 02:26:17.424028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.170 [2024-11-28 02:26:17.629424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.170 [2024-11-28 02:26:17.629594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.430 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.430 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:44.430 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.430 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:44.430 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.430 02:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.430 BaseBdev1_malloc 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.430 true 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.430 [2024-11-28 02:26:18.022566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:44.430 [2024-11-28 02:26:18.022678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.430 [2024-11-28 02:26:18.022705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:44.430 [2024-11-28 02:26:18.022719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.430 [2024-11-28 02:26:18.024795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.430 [2024-11-28 02:26:18.024850] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:44.430 BaseBdev1 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.430 BaseBdev2_malloc 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:44.430 02:26:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.430 true 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.430 [2024-11-28 02:26:18.088812] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:44.430 [2024-11-28 02:26:18.088887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.430 [2024-11-28 02:26:18.088906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:44.430 [2024-11-28 02:26:18.088939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.430 [2024-11-28 02:26:18.090999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.430 [2024-11-28 02:26:18.091055] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:44.430 BaseBdev2 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.430 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:44.691 BaseBdev3_malloc 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.691 true 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.691 [2024-11-28 02:26:18.166196] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:44.691 [2024-11-28 02:26:18.166260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.691 [2024-11-28 02:26:18.166281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:44.691 [2024-11-28 02:26:18.166294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.691 [2024-11-28 02:26:18.168385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.691 [2024-11-28 02:26:18.168434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:44.691 BaseBdev3 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.691 BaseBdev4_malloc 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.691 true 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.691 [2024-11-28 02:26:18.231891] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:44.691 [2024-11-28 02:26:18.231981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.691 [2024-11-28 02:26:18.232007] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:44.691 [2024-11-28 02:26:18.232021] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.691 [2024-11-28 02:26:18.234275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.691 [2024-11-28 02:26:18.234323] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:44.691 BaseBdev4 
00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.691 [2024-11-28 02:26:18.243949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.691 [2024-11-28 02:26:18.245752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.691 [2024-11-28 02:26:18.245840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.691 [2024-11-28 02:26:18.245907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.691 [2024-11-28 02:26:18.246172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:44.691 [2024-11-28 02:26:18.246191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.691 [2024-11-28 02:26:18.246465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:44.691 [2024-11-28 02:26:18.246659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:44.691 [2024-11-28 02:26:18.246671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:44.691 [2024-11-28 02:26:18.246863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.691 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.691 "name": "raid_bdev1", 00:10:44.691 "uuid": "d706b14c-b8f0-4132-9f34-939cbc480f10", 00:10:44.691 "strip_size_kb": 64, 00:10:44.691 "state": "online", 00:10:44.691 "raid_level": "concat", 00:10:44.691 "superblock": true, 00:10:44.691 "num_base_bdevs": 4, 00:10:44.691 "num_base_bdevs_discovered": 4, 00:10:44.691 
"num_base_bdevs_operational": 4, 00:10:44.691 "base_bdevs_list": [ 00:10:44.691 { 00:10:44.691 "name": "BaseBdev1", 00:10:44.692 "uuid": "be828392-435f-548b-bec3-a583f1699b58", 00:10:44.692 "is_configured": true, 00:10:44.692 "data_offset": 2048, 00:10:44.692 "data_size": 63488 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "name": "BaseBdev2", 00:10:44.692 "uuid": "9c6648be-e6ca-50d2-9078-f90de0897803", 00:10:44.692 "is_configured": true, 00:10:44.692 "data_offset": 2048, 00:10:44.692 "data_size": 63488 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "name": "BaseBdev3", 00:10:44.692 "uuid": "f58148fd-3232-5b3e-be81-e7eac596fd85", 00:10:44.692 "is_configured": true, 00:10:44.692 "data_offset": 2048, 00:10:44.692 "data_size": 63488 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "name": "BaseBdev4", 00:10:44.692 "uuid": "e72e8cf6-3b42-5daa-9fe1-a1fab8b94977", 00:10:44.692 "is_configured": true, 00:10:44.692 "data_offset": 2048, 00:10:44.692 "data_size": 63488 00:10:44.692 } 00:10:44.692 ] 00:10:44.692 }' 00:10:44.692 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.692 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.261 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:45.261 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:45.261 [2024-11-28 02:26:18.788400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.202 02:26:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.202 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.202 "name": "raid_bdev1", 00:10:46.202 "uuid": "d706b14c-b8f0-4132-9f34-939cbc480f10", 00:10:46.202 "strip_size_kb": 64, 00:10:46.202 "state": "online", 00:10:46.202 "raid_level": "concat", 00:10:46.202 "superblock": true, 00:10:46.202 "num_base_bdevs": 4, 00:10:46.202 "num_base_bdevs_discovered": 4, 00:10:46.202 "num_base_bdevs_operational": 4, 00:10:46.202 "base_bdevs_list": [ 00:10:46.202 { 00:10:46.202 "name": "BaseBdev1", 00:10:46.202 "uuid": "be828392-435f-548b-bec3-a583f1699b58", 00:10:46.202 "is_configured": true, 00:10:46.202 "data_offset": 2048, 00:10:46.202 "data_size": 63488 00:10:46.202 }, 00:10:46.202 { 00:10:46.202 "name": "BaseBdev2", 00:10:46.202 "uuid": "9c6648be-e6ca-50d2-9078-f90de0897803", 00:10:46.202 "is_configured": true, 00:10:46.202 "data_offset": 2048, 00:10:46.202 "data_size": 63488 00:10:46.202 }, 00:10:46.202 { 00:10:46.202 "name": "BaseBdev3", 00:10:46.202 "uuid": "f58148fd-3232-5b3e-be81-e7eac596fd85", 00:10:46.202 "is_configured": true, 00:10:46.202 "data_offset": 2048, 00:10:46.202 "data_size": 63488 00:10:46.202 }, 00:10:46.202 { 00:10:46.203 "name": "BaseBdev4", 00:10:46.203 "uuid": "e72e8cf6-3b42-5daa-9fe1-a1fab8b94977", 00:10:46.203 "is_configured": true, 00:10:46.203 "data_offset": 2048, 00:10:46.203 "data_size": 63488 00:10:46.203 } 00:10:46.203 ] 00:10:46.203 }' 00:10:46.203 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.203 02:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.463 02:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:46.463 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.463 02:26:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.722 [2024-11-28 02:26:20.144479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.722 [2024-11-28 02:26:20.144594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.722 [2024-11-28 02:26:20.147266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.722 [2024-11-28 02:26:20.147380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.722 [2024-11-28 02:26:20.147449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.722 [2024-11-28 02:26:20.147509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:46.722 { 00:10:46.722 "results": [ 00:10:46.722 { 00:10:46.722 "job": "raid_bdev1", 00:10:46.722 "core_mask": "0x1", 00:10:46.722 "workload": "randrw", 00:10:46.722 "percentage": 50, 00:10:46.722 "status": "finished", 00:10:46.722 "queue_depth": 1, 00:10:46.722 "io_size": 131072, 00:10:46.722 "runtime": 1.357012, 00:10:46.722 "iops": 14873.118292247967, 00:10:46.722 "mibps": 1859.139786530996, 00:10:46.722 "io_failed": 1, 00:10:46.722 "io_timeout": 0, 00:10:46.722 "avg_latency_us": 92.97164289410784, 00:10:46.722 "min_latency_us": 27.612227074235808, 00:10:46.723 "max_latency_us": 1380.8349344978167 00:10:46.723 } 00:10:46.723 ], 00:10:46.723 "core_count": 1 00:10:46.723 } 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72817 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72817 ']' 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72817 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72817 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72817' 00:10:46.723 killing process with pid 72817 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72817 00:10:46.723 [2024-11-28 02:26:20.193838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.723 02:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72817 00:10:46.983 [2024-11-28 02:26:20.514973] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.364 02:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4NKheqehmZ 00:10:48.364 02:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:48.364 02:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:48.364 02:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:48.364 02:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:48.364 ************************************ 00:10:48.364 END TEST raid_write_error_test 00:10:48.364 ************************************ 00:10:48.364 02:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:48.364 02:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:48.364 02:26:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:48.364 00:10:48.364 real 0m4.676s 00:10:48.364 user 0m5.491s 00:10:48.364 sys 0m0.576s 00:10:48.364 02:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.364 02:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.364 02:26:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:48.364 02:26:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:48.364 02:26:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:48.364 02:26:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.364 02:26:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.364 ************************************ 00:10:48.364 START TEST raid_state_function_test 00:10:48.364 ************************************ 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:48.364 02:26:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72964 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72964' 00:10:48.364 Process raid pid: 72964 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72964 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72964 ']' 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.364 02:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.364 [2024-11-28 02:26:21.879110] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:48.364 [2024-11-28 02:26:21.879305] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.624 [2024-11-28 02:26:22.057763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.624 [2024-11-28 02:26:22.172644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.883 [2024-11-28 02:26:22.377899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.883 [2024-11-28 02:26:22.378054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.143 [2024-11-28 02:26:22.699044] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.143 [2024-11-28 02:26:22.699183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.143 [2024-11-28 02:26:22.699217] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.143 [2024-11-28 02:26:22.699247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.143 [2024-11-28 02:26:22.699270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:49.143 [2024-11-28 02:26:22.699298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.143 [2024-11-28 02:26:22.699337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:49.143 [2024-11-28 02:26:22.699392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.143 "name": "Existed_Raid", 00:10:49.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.143 "strip_size_kb": 0, 00:10:49.143 "state": "configuring", 00:10:49.143 "raid_level": "raid1", 00:10:49.143 "superblock": false, 00:10:49.143 "num_base_bdevs": 4, 00:10:49.143 "num_base_bdevs_discovered": 0, 00:10:49.143 "num_base_bdevs_operational": 4, 00:10:49.143 "base_bdevs_list": [ 00:10:49.143 { 00:10:49.143 "name": "BaseBdev1", 00:10:49.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.143 "is_configured": false, 00:10:49.143 "data_offset": 0, 00:10:49.143 "data_size": 0 00:10:49.143 }, 00:10:49.143 { 00:10:49.143 "name": "BaseBdev2", 00:10:49.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.143 "is_configured": false, 00:10:49.143 "data_offset": 0, 00:10:49.143 "data_size": 0 00:10:49.143 }, 00:10:49.143 { 00:10:49.143 "name": "BaseBdev3", 00:10:49.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.143 "is_configured": false, 00:10:49.143 "data_offset": 0, 00:10:49.143 "data_size": 0 00:10:49.143 }, 00:10:49.143 { 00:10:49.143 "name": "BaseBdev4", 00:10:49.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.143 "is_configured": false, 00:10:49.143 "data_offset": 0, 00:10:49.143 "data_size": 0 00:10:49.143 } 00:10:49.143 ] 00:10:49.143 }' 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.143 02:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.714 [2024-11-28 02:26:23.142248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.714 [2024-11-28 02:26:23.142296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.714 [2024-11-28 02:26:23.154190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.714 [2024-11-28 02:26:23.154245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.714 [2024-11-28 02:26:23.154255] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.714 [2024-11-28 02:26:23.154266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.714 [2024-11-28 02:26:23.154274] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.714 [2024-11-28 02:26:23.154285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.714 [2024-11-28 02:26:23.154293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:49.714 [2024-11-28 02:26:23.154303] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.714 [2024-11-28 02:26:23.200641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.714 BaseBdev1 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.714 [ 00:10:49.714 { 00:10:49.714 "name": "BaseBdev1", 00:10:49.714 "aliases": [ 00:10:49.714 "fdb20282-24ff-4203-9d82-ca8e8d25b861" 00:10:49.714 ], 00:10:49.714 "product_name": "Malloc disk", 00:10:49.714 "block_size": 512, 00:10:49.714 "num_blocks": 65536, 00:10:49.714 "uuid": "fdb20282-24ff-4203-9d82-ca8e8d25b861", 00:10:49.714 "assigned_rate_limits": { 00:10:49.714 "rw_ios_per_sec": 0, 00:10:49.714 "rw_mbytes_per_sec": 0, 00:10:49.714 "r_mbytes_per_sec": 0, 00:10:49.714 "w_mbytes_per_sec": 0 00:10:49.714 }, 00:10:49.714 "claimed": true, 00:10:49.714 "claim_type": "exclusive_write", 00:10:49.714 "zoned": false, 00:10:49.714 "supported_io_types": { 00:10:49.714 "read": true, 00:10:49.714 "write": true, 00:10:49.714 "unmap": true, 00:10:49.714 "flush": true, 00:10:49.714 "reset": true, 00:10:49.714 "nvme_admin": false, 00:10:49.714 "nvme_io": false, 00:10:49.714 "nvme_io_md": false, 00:10:49.714 "write_zeroes": true, 00:10:49.714 "zcopy": true, 00:10:49.714 "get_zone_info": false, 00:10:49.714 "zone_management": false, 00:10:49.714 "zone_append": false, 00:10:49.714 "compare": false, 00:10:49.714 "compare_and_write": false, 00:10:49.714 "abort": true, 00:10:49.714 "seek_hole": false, 00:10:49.714 "seek_data": false, 00:10:49.714 "copy": true, 00:10:49.714 "nvme_iov_md": false 00:10:49.714 }, 00:10:49.714 "memory_domains": [ 00:10:49.714 { 00:10:49.714 "dma_device_id": "system", 00:10:49.714 "dma_device_type": 1 00:10:49.714 }, 00:10:49.714 { 00:10:49.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.714 "dma_device_type": 2 00:10:49.714 } 00:10:49.714 ], 00:10:49.714 "driver_specific": {} 00:10:49.714 } 00:10:49.714 ] 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.714 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.714 "name": "Existed_Raid", 
00:10:49.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.714 "strip_size_kb": 0, 00:10:49.714 "state": "configuring", 00:10:49.714 "raid_level": "raid1", 00:10:49.714 "superblock": false, 00:10:49.714 "num_base_bdevs": 4, 00:10:49.714 "num_base_bdevs_discovered": 1, 00:10:49.715 "num_base_bdevs_operational": 4, 00:10:49.715 "base_bdevs_list": [ 00:10:49.715 { 00:10:49.715 "name": "BaseBdev1", 00:10:49.715 "uuid": "fdb20282-24ff-4203-9d82-ca8e8d25b861", 00:10:49.715 "is_configured": true, 00:10:49.715 "data_offset": 0, 00:10:49.715 "data_size": 65536 00:10:49.715 }, 00:10:49.715 { 00:10:49.715 "name": "BaseBdev2", 00:10:49.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.715 "is_configured": false, 00:10:49.715 "data_offset": 0, 00:10:49.715 "data_size": 0 00:10:49.715 }, 00:10:49.715 { 00:10:49.715 "name": "BaseBdev3", 00:10:49.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.715 "is_configured": false, 00:10:49.715 "data_offset": 0, 00:10:49.715 "data_size": 0 00:10:49.715 }, 00:10:49.715 { 00:10:49.715 "name": "BaseBdev4", 00:10:49.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.715 "is_configured": false, 00:10:49.715 "data_offset": 0, 00:10:49.715 "data_size": 0 00:10:49.715 } 00:10:49.715 ] 00:10:49.715 }' 00:10:49.715 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.715 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.286 [2024-11-28 02:26:23.663935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.286 [2024-11-28 02:26:23.663992] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.286 [2024-11-28 02:26:23.675931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.286 [2024-11-28 02:26:23.677696] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.286 [2024-11-28 02:26:23.677750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.286 [2024-11-28 02:26:23.677762] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:50.286 [2024-11-28 02:26:23.677775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.286 [2024-11-28 02:26:23.677783] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:50.286 [2024-11-28 02:26:23.677795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:50.286 
02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.286 "name": "Existed_Raid", 00:10:50.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.286 "strip_size_kb": 0, 00:10:50.286 "state": "configuring", 00:10:50.286 "raid_level": "raid1", 00:10:50.286 "superblock": false, 00:10:50.286 "num_base_bdevs": 4, 00:10:50.286 "num_base_bdevs_discovered": 1, 
00:10:50.286 "num_base_bdevs_operational": 4, 00:10:50.286 "base_bdevs_list": [ 00:10:50.286 { 00:10:50.286 "name": "BaseBdev1", 00:10:50.286 "uuid": "fdb20282-24ff-4203-9d82-ca8e8d25b861", 00:10:50.286 "is_configured": true, 00:10:50.286 "data_offset": 0, 00:10:50.286 "data_size": 65536 00:10:50.286 }, 00:10:50.286 { 00:10:50.286 "name": "BaseBdev2", 00:10:50.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.286 "is_configured": false, 00:10:50.286 "data_offset": 0, 00:10:50.286 "data_size": 0 00:10:50.286 }, 00:10:50.286 { 00:10:50.286 "name": "BaseBdev3", 00:10:50.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.286 "is_configured": false, 00:10:50.286 "data_offset": 0, 00:10:50.286 "data_size": 0 00:10:50.286 }, 00:10:50.286 { 00:10:50.286 "name": "BaseBdev4", 00:10:50.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.286 "is_configured": false, 00:10:50.286 "data_offset": 0, 00:10:50.286 "data_size": 0 00:10:50.286 } 00:10:50.286 ] 00:10:50.286 }' 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.286 02:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.547 [2024-11-28 02:26:24.124342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.547 BaseBdev2 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.547 [ 00:10:50.547 { 00:10:50.547 "name": "BaseBdev2", 00:10:50.547 "aliases": [ 00:10:50.547 "4b717992-7370-4aa0-8b4d-5c328124b328" 00:10:50.547 ], 00:10:50.547 "product_name": "Malloc disk", 00:10:50.547 "block_size": 512, 00:10:50.547 "num_blocks": 65536, 00:10:50.547 "uuid": "4b717992-7370-4aa0-8b4d-5c328124b328", 00:10:50.547 "assigned_rate_limits": { 00:10:50.547 "rw_ios_per_sec": 0, 00:10:50.547 "rw_mbytes_per_sec": 0, 00:10:50.547 "r_mbytes_per_sec": 0, 00:10:50.547 "w_mbytes_per_sec": 0 00:10:50.547 }, 00:10:50.547 "claimed": true, 00:10:50.547 "claim_type": "exclusive_write", 00:10:50.547 "zoned": false, 00:10:50.547 "supported_io_types": { 00:10:50.547 "read": true, 
00:10:50.547 "write": true, 00:10:50.547 "unmap": true, 00:10:50.547 "flush": true, 00:10:50.547 "reset": true, 00:10:50.547 "nvme_admin": false, 00:10:50.547 "nvme_io": false, 00:10:50.547 "nvme_io_md": false, 00:10:50.547 "write_zeroes": true, 00:10:50.547 "zcopy": true, 00:10:50.547 "get_zone_info": false, 00:10:50.547 "zone_management": false, 00:10:50.547 "zone_append": false, 00:10:50.547 "compare": false, 00:10:50.547 "compare_and_write": false, 00:10:50.547 "abort": true, 00:10:50.547 "seek_hole": false, 00:10:50.547 "seek_data": false, 00:10:50.547 "copy": true, 00:10:50.547 "nvme_iov_md": false 00:10:50.547 }, 00:10:50.547 "memory_domains": [ 00:10:50.547 { 00:10:50.547 "dma_device_id": "system", 00:10:50.547 "dma_device_type": 1 00:10:50.547 }, 00:10:50.547 { 00:10:50.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.547 "dma_device_type": 2 00:10:50.547 } 00:10:50.547 ], 00:10:50.547 "driver_specific": {} 00:10:50.547 } 00:10:50.547 ] 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.547 "name": "Existed_Raid", 00:10:50.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.547 "strip_size_kb": 0, 00:10:50.547 "state": "configuring", 00:10:50.547 "raid_level": "raid1", 00:10:50.547 "superblock": false, 00:10:50.547 "num_base_bdevs": 4, 00:10:50.547 "num_base_bdevs_discovered": 2, 00:10:50.547 "num_base_bdevs_operational": 4, 00:10:50.547 "base_bdevs_list": [ 00:10:50.547 { 00:10:50.547 "name": "BaseBdev1", 00:10:50.547 "uuid": "fdb20282-24ff-4203-9d82-ca8e8d25b861", 00:10:50.547 "is_configured": true, 00:10:50.547 "data_offset": 0, 00:10:50.547 "data_size": 65536 00:10:50.547 }, 00:10:50.547 { 00:10:50.547 "name": "BaseBdev2", 00:10:50.547 "uuid": "4b717992-7370-4aa0-8b4d-5c328124b328", 00:10:50.547 "is_configured": true, 
00:10:50.547 "data_offset": 0, 00:10:50.547 "data_size": 65536 00:10:50.547 }, 00:10:50.547 { 00:10:50.547 "name": "BaseBdev3", 00:10:50.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.547 "is_configured": false, 00:10:50.547 "data_offset": 0, 00:10:50.547 "data_size": 0 00:10:50.547 }, 00:10:50.547 { 00:10:50.547 "name": "BaseBdev4", 00:10:50.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.547 "is_configured": false, 00:10:50.547 "data_offset": 0, 00:10:50.547 "data_size": 0 00:10:50.547 } 00:10:50.547 ] 00:10:50.547 }' 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.547 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.117 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.117 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.117 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.117 [2024-11-28 02:26:24.649739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.117 BaseBdev3 00:10:51.117 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.117 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.118 [ 00:10:51.118 { 00:10:51.118 "name": "BaseBdev3", 00:10:51.118 "aliases": [ 00:10:51.118 "8b70851f-4d98-4672-a6e8-74c8dcfcac57" 00:10:51.118 ], 00:10:51.118 "product_name": "Malloc disk", 00:10:51.118 "block_size": 512, 00:10:51.118 "num_blocks": 65536, 00:10:51.118 "uuid": "8b70851f-4d98-4672-a6e8-74c8dcfcac57", 00:10:51.118 "assigned_rate_limits": { 00:10:51.118 "rw_ios_per_sec": 0, 00:10:51.118 "rw_mbytes_per_sec": 0, 00:10:51.118 "r_mbytes_per_sec": 0, 00:10:51.118 "w_mbytes_per_sec": 0 00:10:51.118 }, 00:10:51.118 "claimed": true, 00:10:51.118 "claim_type": "exclusive_write", 00:10:51.118 "zoned": false, 00:10:51.118 "supported_io_types": { 00:10:51.118 "read": true, 00:10:51.118 "write": true, 00:10:51.118 "unmap": true, 00:10:51.118 "flush": true, 00:10:51.118 "reset": true, 00:10:51.118 "nvme_admin": false, 00:10:51.118 "nvme_io": false, 00:10:51.118 "nvme_io_md": false, 00:10:51.118 "write_zeroes": true, 00:10:51.118 "zcopy": true, 00:10:51.118 "get_zone_info": false, 00:10:51.118 "zone_management": false, 00:10:51.118 "zone_append": false, 00:10:51.118 "compare": false, 00:10:51.118 "compare_and_write": false, 
00:10:51.118 "abort": true, 00:10:51.118 "seek_hole": false, 00:10:51.118 "seek_data": false, 00:10:51.118 "copy": true, 00:10:51.118 "nvme_iov_md": false 00:10:51.118 }, 00:10:51.118 "memory_domains": [ 00:10:51.118 { 00:10:51.118 "dma_device_id": "system", 00:10:51.118 "dma_device_type": 1 00:10:51.118 }, 00:10:51.118 { 00:10:51.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.118 "dma_device_type": 2 00:10:51.118 } 00:10:51.118 ], 00:10:51.118 "driver_specific": {} 00:10:51.118 } 00:10:51.118 ] 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.118 "name": "Existed_Raid", 00:10:51.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.118 "strip_size_kb": 0, 00:10:51.118 "state": "configuring", 00:10:51.118 "raid_level": "raid1", 00:10:51.118 "superblock": false, 00:10:51.118 "num_base_bdevs": 4, 00:10:51.118 "num_base_bdevs_discovered": 3, 00:10:51.118 "num_base_bdevs_operational": 4, 00:10:51.118 "base_bdevs_list": [ 00:10:51.118 { 00:10:51.118 "name": "BaseBdev1", 00:10:51.118 "uuid": "fdb20282-24ff-4203-9d82-ca8e8d25b861", 00:10:51.118 "is_configured": true, 00:10:51.118 "data_offset": 0, 00:10:51.118 "data_size": 65536 00:10:51.118 }, 00:10:51.118 { 00:10:51.118 "name": "BaseBdev2", 00:10:51.118 "uuid": "4b717992-7370-4aa0-8b4d-5c328124b328", 00:10:51.118 "is_configured": true, 00:10:51.118 "data_offset": 0, 00:10:51.118 "data_size": 65536 00:10:51.118 }, 00:10:51.118 { 00:10:51.118 "name": "BaseBdev3", 00:10:51.118 "uuid": "8b70851f-4d98-4672-a6e8-74c8dcfcac57", 00:10:51.118 "is_configured": true, 00:10:51.118 "data_offset": 0, 00:10:51.118 "data_size": 65536 00:10:51.118 }, 00:10:51.118 { 00:10:51.118 "name": "BaseBdev4", 00:10:51.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.118 "is_configured": false, 
00:10:51.118 "data_offset": 0, 00:10:51.118 "data_size": 0 00:10:51.118 } 00:10:51.118 ] 00:10:51.118 }' 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.118 02:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.688 [2024-11-28 02:26:25.166450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:51.688 [2024-11-28 02:26:25.166514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:51.688 [2024-11-28 02:26:25.166524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:51.688 [2024-11-28 02:26:25.166797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:51.688 [2024-11-28 02:26:25.167029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:51.688 [2024-11-28 02:26:25.167049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:51.688 [2024-11-28 02:26:25.167350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.688 BaseBdev4 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.688 [ 00:10:51.688 { 00:10:51.688 "name": "BaseBdev4", 00:10:51.688 "aliases": [ 00:10:51.688 "8efbd4bf-c3b0-489b-a41c-0dd29639da2a" 00:10:51.688 ], 00:10:51.688 "product_name": "Malloc disk", 00:10:51.688 "block_size": 512, 00:10:51.688 "num_blocks": 65536, 00:10:51.688 "uuid": "8efbd4bf-c3b0-489b-a41c-0dd29639da2a", 00:10:51.688 "assigned_rate_limits": { 00:10:51.688 "rw_ios_per_sec": 0, 00:10:51.688 "rw_mbytes_per_sec": 0, 00:10:51.688 "r_mbytes_per_sec": 0, 00:10:51.688 "w_mbytes_per_sec": 0 00:10:51.688 }, 00:10:51.688 "claimed": true, 00:10:51.688 "claim_type": "exclusive_write", 00:10:51.688 "zoned": false, 00:10:51.688 "supported_io_types": { 00:10:51.688 "read": true, 00:10:51.688 "write": true, 00:10:51.688 "unmap": true, 00:10:51.688 "flush": true, 00:10:51.688 "reset": true, 00:10:51.688 
"nvme_admin": false, 00:10:51.688 "nvme_io": false, 00:10:51.688 "nvme_io_md": false, 00:10:51.688 "write_zeroes": true, 00:10:51.688 "zcopy": true, 00:10:51.688 "get_zone_info": false, 00:10:51.688 "zone_management": false, 00:10:51.688 "zone_append": false, 00:10:51.688 "compare": false, 00:10:51.688 "compare_and_write": false, 00:10:51.688 "abort": true, 00:10:51.688 "seek_hole": false, 00:10:51.688 "seek_data": false, 00:10:51.688 "copy": true, 00:10:51.688 "nvme_iov_md": false 00:10:51.688 }, 00:10:51.688 "memory_domains": [ 00:10:51.688 { 00:10:51.688 "dma_device_id": "system", 00:10:51.688 "dma_device_type": 1 00:10:51.688 }, 00:10:51.688 { 00:10:51.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.688 "dma_device_type": 2 00:10:51.688 } 00:10:51.688 ], 00:10:51.688 "driver_specific": {} 00:10:51.688 } 00:10:51.688 ] 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.688 02:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.688 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.688 "name": "Existed_Raid", 00:10:51.688 "uuid": "4feac78d-3098-42c0-b54a-10730614b440", 00:10:51.688 "strip_size_kb": 0, 00:10:51.688 "state": "online", 00:10:51.688 "raid_level": "raid1", 00:10:51.688 "superblock": false, 00:10:51.688 "num_base_bdevs": 4, 00:10:51.688 "num_base_bdevs_discovered": 4, 00:10:51.688 "num_base_bdevs_operational": 4, 00:10:51.688 "base_bdevs_list": [ 00:10:51.688 { 00:10:51.688 "name": "BaseBdev1", 00:10:51.688 "uuid": "fdb20282-24ff-4203-9d82-ca8e8d25b861", 00:10:51.688 "is_configured": true, 00:10:51.688 "data_offset": 0, 00:10:51.688 "data_size": 65536 00:10:51.688 }, 00:10:51.688 { 00:10:51.688 "name": "BaseBdev2", 00:10:51.688 "uuid": "4b717992-7370-4aa0-8b4d-5c328124b328", 00:10:51.688 "is_configured": true, 00:10:51.688 "data_offset": 0, 00:10:51.688 "data_size": 65536 00:10:51.688 }, 00:10:51.688 { 00:10:51.688 "name": "BaseBdev3", 00:10:51.688 "uuid": 
"8b70851f-4d98-4672-a6e8-74c8dcfcac57", 00:10:51.688 "is_configured": true, 00:10:51.688 "data_offset": 0, 00:10:51.688 "data_size": 65536 00:10:51.688 }, 00:10:51.688 { 00:10:51.688 "name": "BaseBdev4", 00:10:51.688 "uuid": "8efbd4bf-c3b0-489b-a41c-0dd29639da2a", 00:10:51.689 "is_configured": true, 00:10:51.689 "data_offset": 0, 00:10:51.689 "data_size": 65536 00:10:51.689 } 00:10:51.689 ] 00:10:51.689 }' 00:10:51.689 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.689 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.259 [2024-11-28 02:26:25.658089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.259 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.259 02:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.259 "name": "Existed_Raid", 00:10:52.259 "aliases": [ 00:10:52.259 "4feac78d-3098-42c0-b54a-10730614b440" 00:10:52.259 ], 00:10:52.259 "product_name": "Raid Volume", 00:10:52.259 "block_size": 512, 00:10:52.259 "num_blocks": 65536, 00:10:52.259 "uuid": "4feac78d-3098-42c0-b54a-10730614b440", 00:10:52.259 "assigned_rate_limits": { 00:10:52.259 "rw_ios_per_sec": 0, 00:10:52.259 "rw_mbytes_per_sec": 0, 00:10:52.259 "r_mbytes_per_sec": 0, 00:10:52.259 "w_mbytes_per_sec": 0 00:10:52.259 }, 00:10:52.259 "claimed": false, 00:10:52.259 "zoned": false, 00:10:52.259 "supported_io_types": { 00:10:52.259 "read": true, 00:10:52.259 "write": true, 00:10:52.259 "unmap": false, 00:10:52.259 "flush": false, 00:10:52.259 "reset": true, 00:10:52.259 "nvme_admin": false, 00:10:52.259 "nvme_io": false, 00:10:52.259 "nvme_io_md": false, 00:10:52.259 "write_zeroes": true, 00:10:52.259 "zcopy": false, 00:10:52.259 "get_zone_info": false, 00:10:52.259 "zone_management": false, 00:10:52.259 "zone_append": false, 00:10:52.259 "compare": false, 00:10:52.259 "compare_and_write": false, 00:10:52.259 "abort": false, 00:10:52.259 "seek_hole": false, 00:10:52.259 "seek_data": false, 00:10:52.259 "copy": false, 00:10:52.259 "nvme_iov_md": false 00:10:52.259 }, 00:10:52.259 "memory_domains": [ 00:10:52.259 { 00:10:52.259 "dma_device_id": "system", 00:10:52.259 "dma_device_type": 1 00:10:52.259 }, 00:10:52.259 { 00:10:52.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.259 "dma_device_type": 2 00:10:52.259 }, 00:10:52.259 { 00:10:52.259 "dma_device_id": "system", 00:10:52.259 "dma_device_type": 1 00:10:52.259 }, 00:10:52.259 { 00:10:52.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.259 "dma_device_type": 2 00:10:52.259 }, 00:10:52.260 { 00:10:52.260 "dma_device_id": "system", 00:10:52.260 "dma_device_type": 1 00:10:52.260 }, 00:10:52.260 { 00:10:52.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:52.260 "dma_device_type": 2 00:10:52.260 }, 00:10:52.260 { 00:10:52.260 "dma_device_id": "system", 00:10:52.260 "dma_device_type": 1 00:10:52.260 }, 00:10:52.260 { 00:10:52.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.260 "dma_device_type": 2 00:10:52.260 } 00:10:52.260 ], 00:10:52.260 "driver_specific": { 00:10:52.260 "raid": { 00:10:52.260 "uuid": "4feac78d-3098-42c0-b54a-10730614b440", 00:10:52.260 "strip_size_kb": 0, 00:10:52.260 "state": "online", 00:10:52.260 "raid_level": "raid1", 00:10:52.260 "superblock": false, 00:10:52.260 "num_base_bdevs": 4, 00:10:52.260 "num_base_bdevs_discovered": 4, 00:10:52.260 "num_base_bdevs_operational": 4, 00:10:52.260 "base_bdevs_list": [ 00:10:52.260 { 00:10:52.260 "name": "BaseBdev1", 00:10:52.260 "uuid": "fdb20282-24ff-4203-9d82-ca8e8d25b861", 00:10:52.260 "is_configured": true, 00:10:52.260 "data_offset": 0, 00:10:52.260 "data_size": 65536 00:10:52.260 }, 00:10:52.260 { 00:10:52.260 "name": "BaseBdev2", 00:10:52.260 "uuid": "4b717992-7370-4aa0-8b4d-5c328124b328", 00:10:52.260 "is_configured": true, 00:10:52.260 "data_offset": 0, 00:10:52.260 "data_size": 65536 00:10:52.260 }, 00:10:52.260 { 00:10:52.260 "name": "BaseBdev3", 00:10:52.260 "uuid": "8b70851f-4d98-4672-a6e8-74c8dcfcac57", 00:10:52.260 "is_configured": true, 00:10:52.260 "data_offset": 0, 00:10:52.260 "data_size": 65536 00:10:52.260 }, 00:10:52.260 { 00:10:52.260 "name": "BaseBdev4", 00:10:52.260 "uuid": "8efbd4bf-c3b0-489b-a41c-0dd29639da2a", 00:10:52.260 "is_configured": true, 00:10:52.260 "data_offset": 0, 00:10:52.260 "data_size": 65536 00:10:52.260 } 00:10:52.260 ] 00:10:52.260 } 00:10:52.260 } 00:10:52.260 }' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:52.260 BaseBdev2 00:10:52.260 BaseBdev3 
00:10:52.260 BaseBdev4' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.260 02:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.260 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.520 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.520 02:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.520 02:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.520 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.520 02:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.520 [2024-11-28 02:26:25.945249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.520 
02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.520 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.520 "name": "Existed_Raid", 00:10:52.520 "uuid": "4feac78d-3098-42c0-b54a-10730614b440", 00:10:52.520 "strip_size_kb": 0, 00:10:52.520 "state": "online", 00:10:52.520 "raid_level": "raid1", 00:10:52.520 "superblock": false, 00:10:52.521 "num_base_bdevs": 4, 00:10:52.521 "num_base_bdevs_discovered": 3, 00:10:52.521 "num_base_bdevs_operational": 3, 00:10:52.521 "base_bdevs_list": [ 00:10:52.521 { 00:10:52.521 "name": null, 00:10:52.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.521 "is_configured": false, 00:10:52.521 "data_offset": 0, 00:10:52.521 "data_size": 65536 00:10:52.521 }, 00:10:52.521 { 00:10:52.521 "name": "BaseBdev2", 00:10:52.521 "uuid": "4b717992-7370-4aa0-8b4d-5c328124b328", 00:10:52.521 "is_configured": true, 00:10:52.521 "data_offset": 0, 00:10:52.521 "data_size": 65536 00:10:52.521 }, 00:10:52.521 { 00:10:52.521 "name": "BaseBdev3", 00:10:52.521 "uuid": "8b70851f-4d98-4672-a6e8-74c8dcfcac57", 00:10:52.521 "is_configured": true, 00:10:52.521 "data_offset": 0, 
00:10:52.521 "data_size": 65536 00:10:52.521 }, 00:10:52.521 { 00:10:52.521 "name": "BaseBdev4", 00:10:52.521 "uuid": "8efbd4bf-c3b0-489b-a41c-0dd29639da2a", 00:10:52.521 "is_configured": true, 00:10:52.521 "data_offset": 0, 00:10:52.521 "data_size": 65536 00:10:52.521 } 00:10:52.521 ] 00:10:52.521 }' 00:10:52.521 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.521 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.090 [2024-11-28 02:26:26.538802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.090 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.091 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.091 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.091 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.091 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.091 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.091 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.091 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:53.091 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.091 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.091 [2024-11-28 02:26:26.694562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.351 [2024-11-28 02:26:26.845374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:53.351 [2024-11-28 02:26:26.845531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.351 [2024-11-28 02:26:26.939378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.351 [2024-11-28 02:26:26.939514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.351 [2024-11-28 02:26:26.939564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.351 02:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.612 BaseBdev2 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.612 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.612 [ 00:10:53.612 { 00:10:53.612 "name": "BaseBdev2", 00:10:53.612 "aliases": [ 00:10:53.612 "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96" 00:10:53.612 ], 00:10:53.612 "product_name": "Malloc disk", 00:10:53.612 "block_size": 512, 00:10:53.612 "num_blocks": 65536, 00:10:53.612 "uuid": "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96", 00:10:53.612 "assigned_rate_limits": { 00:10:53.612 "rw_ios_per_sec": 0, 00:10:53.612 "rw_mbytes_per_sec": 0, 00:10:53.612 "r_mbytes_per_sec": 0, 00:10:53.612 "w_mbytes_per_sec": 0 00:10:53.612 }, 00:10:53.612 "claimed": false, 00:10:53.612 "zoned": false, 00:10:53.612 "supported_io_types": { 00:10:53.612 "read": true, 00:10:53.612 "write": true, 00:10:53.613 "unmap": true, 00:10:53.613 "flush": true, 00:10:53.613 "reset": true, 00:10:53.613 "nvme_admin": false, 00:10:53.613 "nvme_io": false, 00:10:53.613 "nvme_io_md": false, 00:10:53.613 "write_zeroes": true, 00:10:53.613 "zcopy": true, 00:10:53.613 "get_zone_info": false, 00:10:53.613 "zone_management": false, 00:10:53.613 "zone_append": false, 
00:10:53.613 "compare": false, 00:10:53.613 "compare_and_write": false, 00:10:53.613 "abort": true, 00:10:53.613 "seek_hole": false, 00:10:53.613 "seek_data": false, 00:10:53.613 "copy": true, 00:10:53.613 "nvme_iov_md": false 00:10:53.613 }, 00:10:53.613 "memory_domains": [ 00:10:53.613 { 00:10:53.613 "dma_device_id": "system", 00:10:53.613 "dma_device_type": 1 00:10:53.613 }, 00:10:53.613 { 00:10:53.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.613 "dma_device_type": 2 00:10:53.613 } 00:10:53.613 ], 00:10:53.613 "driver_specific": {} 00:10:53.613 } 00:10:53.613 ] 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.613 BaseBdev3 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.613 [ 00:10:53.613 { 00:10:53.613 "name": "BaseBdev3", 00:10:53.613 "aliases": [ 00:10:53.613 "8e165036-2103-461a-aa3c-137f9dd12800" 00:10:53.613 ], 00:10:53.613 "product_name": "Malloc disk", 00:10:53.613 "block_size": 512, 00:10:53.613 "num_blocks": 65536, 00:10:53.613 "uuid": "8e165036-2103-461a-aa3c-137f9dd12800", 00:10:53.613 "assigned_rate_limits": { 00:10:53.613 "rw_ios_per_sec": 0, 00:10:53.613 "rw_mbytes_per_sec": 0, 00:10:53.613 "r_mbytes_per_sec": 0, 00:10:53.613 "w_mbytes_per_sec": 0 00:10:53.613 }, 00:10:53.613 "claimed": false, 00:10:53.613 "zoned": false, 00:10:53.613 "supported_io_types": { 00:10:53.613 "read": true, 00:10:53.613 "write": true, 00:10:53.613 "unmap": true, 00:10:53.613 "flush": true, 00:10:53.613 "reset": true, 00:10:53.613 "nvme_admin": false, 00:10:53.613 "nvme_io": false, 00:10:53.613 "nvme_io_md": false, 00:10:53.613 "write_zeroes": true, 00:10:53.613 "zcopy": true, 00:10:53.613 "get_zone_info": false, 00:10:53.613 "zone_management": false, 00:10:53.613 "zone_append": false, 
00:10:53.613 "compare": false, 00:10:53.613 "compare_and_write": false, 00:10:53.613 "abort": true, 00:10:53.613 "seek_hole": false, 00:10:53.613 "seek_data": false, 00:10:53.613 "copy": true, 00:10:53.613 "nvme_iov_md": false 00:10:53.613 }, 00:10:53.613 "memory_domains": [ 00:10:53.613 { 00:10:53.613 "dma_device_id": "system", 00:10:53.613 "dma_device_type": 1 00:10:53.613 }, 00:10:53.613 { 00:10:53.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.613 "dma_device_type": 2 00:10:53.613 } 00:10:53.613 ], 00:10:53.613 "driver_specific": {} 00:10:53.613 } 00:10:53.613 ] 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.613 BaseBdev4 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.613 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.613 [ 00:10:53.613 { 00:10:53.613 "name": "BaseBdev4", 00:10:53.613 "aliases": [ 00:10:53.613 "645854e6-02d9-4dee-90e2-9eca353332a6" 00:10:53.613 ], 00:10:53.613 "product_name": "Malloc disk", 00:10:53.613 "block_size": 512, 00:10:53.613 "num_blocks": 65536, 00:10:53.613 "uuid": "645854e6-02d9-4dee-90e2-9eca353332a6", 00:10:53.613 "assigned_rate_limits": { 00:10:53.613 "rw_ios_per_sec": 0, 00:10:53.613 "rw_mbytes_per_sec": 0, 00:10:53.613 "r_mbytes_per_sec": 0, 00:10:53.613 "w_mbytes_per_sec": 0 00:10:53.613 }, 00:10:53.613 "claimed": false, 00:10:53.613 "zoned": false, 00:10:53.614 "supported_io_types": { 00:10:53.614 "read": true, 00:10:53.614 "write": true, 00:10:53.614 "unmap": true, 00:10:53.614 "flush": true, 00:10:53.614 "reset": true, 00:10:53.614 "nvme_admin": false, 00:10:53.614 "nvme_io": false, 00:10:53.614 "nvme_io_md": false, 00:10:53.614 "write_zeroes": true, 00:10:53.614 "zcopy": true, 00:10:53.614 "get_zone_info": false, 00:10:53.614 "zone_management": false, 00:10:53.614 "zone_append": false, 
00:10:53.614 "compare": false, 00:10:53.614 "compare_and_write": false, 00:10:53.614 "abort": true, 00:10:53.614 "seek_hole": false, 00:10:53.614 "seek_data": false, 00:10:53.614 "copy": true, 00:10:53.614 "nvme_iov_md": false 00:10:53.614 }, 00:10:53.614 "memory_domains": [ 00:10:53.614 { 00:10:53.614 "dma_device_id": "system", 00:10:53.614 "dma_device_type": 1 00:10:53.614 }, 00:10:53.614 { 00:10:53.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.614 "dma_device_type": 2 00:10:53.614 } 00:10:53.614 ], 00:10:53.614 "driver_specific": {} 00:10:53.614 } 00:10:53.614 ] 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.614 [2024-11-28 02:26:27.233719] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.614 [2024-11-28 02:26:27.233840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.614 [2024-11-28 02:26:27.233885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.614 [2024-11-28 02:26:27.235717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.614 [2024-11-28 02:26:27.235825] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.614 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.874 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:53.874 "name": "Existed_Raid", 00:10:53.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.874 "strip_size_kb": 0, 00:10:53.874 "state": "configuring", 00:10:53.874 "raid_level": "raid1", 00:10:53.874 "superblock": false, 00:10:53.874 "num_base_bdevs": 4, 00:10:53.874 "num_base_bdevs_discovered": 3, 00:10:53.874 "num_base_bdevs_operational": 4, 00:10:53.874 "base_bdevs_list": [ 00:10:53.874 { 00:10:53.874 "name": "BaseBdev1", 00:10:53.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.874 "is_configured": false, 00:10:53.874 "data_offset": 0, 00:10:53.874 "data_size": 0 00:10:53.874 }, 00:10:53.874 { 00:10:53.874 "name": "BaseBdev2", 00:10:53.874 "uuid": "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96", 00:10:53.874 "is_configured": true, 00:10:53.874 "data_offset": 0, 00:10:53.874 "data_size": 65536 00:10:53.874 }, 00:10:53.874 { 00:10:53.874 "name": "BaseBdev3", 00:10:53.874 "uuid": "8e165036-2103-461a-aa3c-137f9dd12800", 00:10:53.874 "is_configured": true, 00:10:53.874 "data_offset": 0, 00:10:53.874 "data_size": 65536 00:10:53.874 }, 00:10:53.874 { 00:10:53.874 "name": "BaseBdev4", 00:10:53.874 "uuid": "645854e6-02d9-4dee-90e2-9eca353332a6", 00:10:53.874 "is_configured": true, 00:10:53.874 "data_offset": 0, 00:10:53.874 "data_size": 65536 00:10:53.874 } 00:10:53.874 ] 00:10:53.874 }' 00:10:53.874 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.874 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.135 [2024-11-28 02:26:27.693069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.135 "name": "Existed_Raid", 00:10:54.135 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:54.135 "strip_size_kb": 0, 00:10:54.135 "state": "configuring", 00:10:54.135 "raid_level": "raid1", 00:10:54.135 "superblock": false, 00:10:54.135 "num_base_bdevs": 4, 00:10:54.135 "num_base_bdevs_discovered": 2, 00:10:54.135 "num_base_bdevs_operational": 4, 00:10:54.135 "base_bdevs_list": [ 00:10:54.135 { 00:10:54.135 "name": "BaseBdev1", 00:10:54.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.135 "is_configured": false, 00:10:54.135 "data_offset": 0, 00:10:54.135 "data_size": 0 00:10:54.135 }, 00:10:54.135 { 00:10:54.135 "name": null, 00:10:54.135 "uuid": "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96", 00:10:54.135 "is_configured": false, 00:10:54.135 "data_offset": 0, 00:10:54.135 "data_size": 65536 00:10:54.135 }, 00:10:54.135 { 00:10:54.135 "name": "BaseBdev3", 00:10:54.135 "uuid": "8e165036-2103-461a-aa3c-137f9dd12800", 00:10:54.135 "is_configured": true, 00:10:54.135 "data_offset": 0, 00:10:54.135 "data_size": 65536 00:10:54.135 }, 00:10:54.135 { 00:10:54.135 "name": "BaseBdev4", 00:10:54.135 "uuid": "645854e6-02d9-4dee-90e2-9eca353332a6", 00:10:54.135 "is_configured": true, 00:10:54.135 "data_offset": 0, 00:10:54.135 "data_size": 65536 00:10:54.135 } 00:10:54.135 ] 00:10:54.135 }' 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.135 02:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.705 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.705 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.705 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.705 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.705 02:26:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.705 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:54.705 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.705 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.706 [2024-11-28 02:26:28.221111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.706 BaseBdev1 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.706 [ 00:10:54.706 { 00:10:54.706 "name": "BaseBdev1", 00:10:54.706 "aliases": [ 00:10:54.706 "a68ece2b-8729-4750-b28a-edadfc7e0eba" 00:10:54.706 ], 00:10:54.706 "product_name": "Malloc disk", 00:10:54.706 "block_size": 512, 00:10:54.706 "num_blocks": 65536, 00:10:54.706 "uuid": "a68ece2b-8729-4750-b28a-edadfc7e0eba", 00:10:54.706 "assigned_rate_limits": { 00:10:54.706 "rw_ios_per_sec": 0, 00:10:54.706 "rw_mbytes_per_sec": 0, 00:10:54.706 "r_mbytes_per_sec": 0, 00:10:54.706 "w_mbytes_per_sec": 0 00:10:54.706 }, 00:10:54.706 "claimed": true, 00:10:54.706 "claim_type": "exclusive_write", 00:10:54.706 "zoned": false, 00:10:54.706 "supported_io_types": { 00:10:54.706 "read": true, 00:10:54.706 "write": true, 00:10:54.706 "unmap": true, 00:10:54.706 "flush": true, 00:10:54.706 "reset": true, 00:10:54.706 "nvme_admin": false, 00:10:54.706 "nvme_io": false, 00:10:54.706 "nvme_io_md": false, 00:10:54.706 "write_zeroes": true, 00:10:54.706 "zcopy": true, 00:10:54.706 "get_zone_info": false, 00:10:54.706 "zone_management": false, 00:10:54.706 "zone_append": false, 00:10:54.706 "compare": false, 00:10:54.706 "compare_and_write": false, 00:10:54.706 "abort": true, 00:10:54.706 "seek_hole": false, 00:10:54.706 "seek_data": false, 00:10:54.706 "copy": true, 00:10:54.706 "nvme_iov_md": false 00:10:54.706 }, 00:10:54.706 "memory_domains": [ 00:10:54.706 { 00:10:54.706 "dma_device_id": "system", 00:10:54.706 "dma_device_type": 1 00:10:54.706 }, 00:10:54.706 { 00:10:54.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.706 "dma_device_type": 2 00:10:54.706 } 00:10:54.706 ], 00:10:54.706 "driver_specific": {} 00:10:54.706 } 00:10:54.706 ] 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
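Editor's note: each `waitforbdev BaseBdevN` call in this trace reduces to: record the name, default the timeout to 2000 ms, run `bdev_wait_for_examine`, then `bdev_get_bdevs -b <name> -t <timeout>`. A rough polling equivalent, where `check_bdev` is a hypothetical stand-in for the RPC probe (the real helper in autotest_common.sh hands the timeout to the RPC itself rather than looping):

```shell
# Illustrative polling variant of waitforbdev; check_bdev is NOT a real
# SPDK helper, just a stand-in for `rpc_cmd bdev_get_bdevs -b "$1"`.
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # milliseconds, matching the trace default
    local i
    for ((i = 0; i < bdev_timeout / 100; i++)); do
        if check_bdev "$bdev_name"; then
            return 0                # bdev is registered and examined
        fi
        sleep 0.1
    done
    return 1                        # bdev never showed up within the timeout
}

# Fake backend for the sketch: the bdev "appears" on the third poll.
polls=0
check_bdev() { polls=$((polls + 1)); [ "$polls" -ge 3 ]; }

waitforbdev BaseBdev1 2000 && echo "BaseBdev1 ready after $polls polls"
```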
00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.706 "name": "Existed_Raid", 00:10:54.706 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:54.706 "strip_size_kb": 0, 00:10:54.706 "state": "configuring", 00:10:54.706 "raid_level": "raid1", 00:10:54.706 "superblock": false, 00:10:54.706 "num_base_bdevs": 4, 00:10:54.706 "num_base_bdevs_discovered": 3, 00:10:54.706 "num_base_bdevs_operational": 4, 00:10:54.706 "base_bdevs_list": [ 00:10:54.706 { 00:10:54.706 "name": "BaseBdev1", 00:10:54.706 "uuid": "a68ece2b-8729-4750-b28a-edadfc7e0eba", 00:10:54.706 "is_configured": true, 00:10:54.706 "data_offset": 0, 00:10:54.706 "data_size": 65536 00:10:54.706 }, 00:10:54.706 { 00:10:54.706 "name": null, 00:10:54.706 "uuid": "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96", 00:10:54.706 "is_configured": false, 00:10:54.706 "data_offset": 0, 00:10:54.706 "data_size": 65536 00:10:54.706 }, 00:10:54.706 { 00:10:54.706 "name": "BaseBdev3", 00:10:54.706 "uuid": "8e165036-2103-461a-aa3c-137f9dd12800", 00:10:54.706 "is_configured": true, 00:10:54.706 "data_offset": 0, 00:10:54.706 "data_size": 65536 00:10:54.706 }, 00:10:54.706 { 00:10:54.706 "name": "BaseBdev4", 00:10:54.706 "uuid": "645854e6-02d9-4dee-90e2-9eca353332a6", 00:10:54.706 "is_configured": true, 00:10:54.706 "data_offset": 0, 00:10:54.706 "data_size": 65536 00:10:54.706 } 00:10:54.706 ] 00:10:54.706 }' 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.706 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.276 [2024-11-28 02:26:28.724395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.276 "name": "Existed_Raid", 00:10:55.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.276 "strip_size_kb": 0, 00:10:55.276 "state": "configuring", 00:10:55.276 "raid_level": "raid1", 00:10:55.276 "superblock": false, 00:10:55.276 "num_base_bdevs": 4, 00:10:55.276 "num_base_bdevs_discovered": 2, 00:10:55.276 "num_base_bdevs_operational": 4, 00:10:55.276 "base_bdevs_list": [ 00:10:55.276 { 00:10:55.276 "name": "BaseBdev1", 00:10:55.276 "uuid": "a68ece2b-8729-4750-b28a-edadfc7e0eba", 00:10:55.276 "is_configured": true, 00:10:55.276 "data_offset": 0, 00:10:55.276 "data_size": 65536 00:10:55.276 }, 00:10:55.276 { 00:10:55.276 "name": null, 00:10:55.276 "uuid": "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96", 00:10:55.276 "is_configured": false, 00:10:55.276 "data_offset": 0, 00:10:55.276 "data_size": 65536 00:10:55.276 }, 00:10:55.276 { 00:10:55.276 "name": null, 00:10:55.276 "uuid": "8e165036-2103-461a-aa3c-137f9dd12800", 00:10:55.276 "is_configured": false, 00:10:55.276 "data_offset": 0, 00:10:55.276 "data_size": 65536 00:10:55.276 }, 00:10:55.276 { 00:10:55.276 "name": "BaseBdev4", 00:10:55.276 "uuid": "645854e6-02d9-4dee-90e2-9eca353332a6", 00:10:55.276 "is_configured": true, 00:10:55.276 "data_offset": 0, 00:10:55.276 "data_size": 65536 00:10:55.276 } 00:10:55.276 ] 00:10:55.276 }' 00:10:55.276 02:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.276 02:26:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.536 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.537 [2024-11-28 02:26:29.187627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.537 02:26:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.537 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.797 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.797 "name": "Existed_Raid", 00:10:55.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.797 "strip_size_kb": 0, 00:10:55.797 "state": "configuring", 00:10:55.797 "raid_level": "raid1", 00:10:55.797 "superblock": false, 00:10:55.797 "num_base_bdevs": 4, 00:10:55.797 "num_base_bdevs_discovered": 3, 00:10:55.797 "num_base_bdevs_operational": 4, 00:10:55.797 "base_bdevs_list": [ 00:10:55.797 { 00:10:55.797 "name": "BaseBdev1", 00:10:55.797 "uuid": "a68ece2b-8729-4750-b28a-edadfc7e0eba", 00:10:55.797 "is_configured": true, 00:10:55.797 "data_offset": 0, 00:10:55.797 "data_size": 65536 00:10:55.797 }, 00:10:55.797 { 00:10:55.797 "name": null, 00:10:55.797 "uuid": "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96", 00:10:55.797 "is_configured": false, 00:10:55.797 "data_offset": 
0, 00:10:55.797 "data_size": 65536 00:10:55.797 }, 00:10:55.797 { 00:10:55.797 "name": "BaseBdev3", 00:10:55.797 "uuid": "8e165036-2103-461a-aa3c-137f9dd12800", 00:10:55.797 "is_configured": true, 00:10:55.797 "data_offset": 0, 00:10:55.797 "data_size": 65536 00:10:55.797 }, 00:10:55.797 { 00:10:55.797 "name": "BaseBdev4", 00:10:55.797 "uuid": "645854e6-02d9-4dee-90e2-9eca353332a6", 00:10:55.797 "is_configured": true, 00:10:55.797 "data_offset": 0, 00:10:55.797 "data_size": 65536 00:10:55.797 } 00:10:55.797 ] 00:10:55.797 }' 00:10:55.797 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.797 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.056 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.056 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:56.056 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.057 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.057 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.057 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:56.057 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.057 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.057 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.057 [2024-11-28 02:26:29.686856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.317 02:26:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.317 "name": "Existed_Raid", 00:10:56.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.317 "strip_size_kb": 0, 00:10:56.317 "state": "configuring", 00:10:56.317 
"raid_level": "raid1", 00:10:56.317 "superblock": false, 00:10:56.317 "num_base_bdevs": 4, 00:10:56.317 "num_base_bdevs_discovered": 2, 00:10:56.317 "num_base_bdevs_operational": 4, 00:10:56.317 "base_bdevs_list": [ 00:10:56.317 { 00:10:56.317 "name": null, 00:10:56.317 "uuid": "a68ece2b-8729-4750-b28a-edadfc7e0eba", 00:10:56.317 "is_configured": false, 00:10:56.317 "data_offset": 0, 00:10:56.317 "data_size": 65536 00:10:56.317 }, 00:10:56.317 { 00:10:56.317 "name": null, 00:10:56.317 "uuid": "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96", 00:10:56.317 "is_configured": false, 00:10:56.317 "data_offset": 0, 00:10:56.317 "data_size": 65536 00:10:56.317 }, 00:10:56.317 { 00:10:56.317 "name": "BaseBdev3", 00:10:56.317 "uuid": "8e165036-2103-461a-aa3c-137f9dd12800", 00:10:56.317 "is_configured": true, 00:10:56.317 "data_offset": 0, 00:10:56.317 "data_size": 65536 00:10:56.317 }, 00:10:56.317 { 00:10:56.317 "name": "BaseBdev4", 00:10:56.317 "uuid": "645854e6-02d9-4dee-90e2-9eca353332a6", 00:10:56.317 "is_configured": true, 00:10:56.317 "data_offset": 0, 00:10:56.317 "data_size": 65536 00:10:56.317 } 00:10:56.317 ] 00:10:56.317 }' 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.317 02:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.887 [2024-11-28 02:26:30.325327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.887 "name": "Existed_Raid", 00:10:56.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.887 "strip_size_kb": 0, 00:10:56.887 "state": "configuring", 00:10:56.887 "raid_level": "raid1", 00:10:56.887 "superblock": false, 00:10:56.887 "num_base_bdevs": 4, 00:10:56.887 "num_base_bdevs_discovered": 3, 00:10:56.887 "num_base_bdevs_operational": 4, 00:10:56.887 "base_bdevs_list": [ 00:10:56.887 { 00:10:56.887 "name": null, 00:10:56.887 "uuid": "a68ece2b-8729-4750-b28a-edadfc7e0eba", 00:10:56.887 "is_configured": false, 00:10:56.887 "data_offset": 0, 00:10:56.887 "data_size": 65536 00:10:56.887 }, 00:10:56.887 { 00:10:56.887 "name": "BaseBdev2", 00:10:56.887 "uuid": "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96", 00:10:56.887 "is_configured": true, 00:10:56.887 "data_offset": 0, 00:10:56.887 "data_size": 65536 00:10:56.887 }, 00:10:56.887 { 00:10:56.887 "name": "BaseBdev3", 00:10:56.887 "uuid": "8e165036-2103-461a-aa3c-137f9dd12800", 00:10:56.887 "is_configured": true, 00:10:56.887 "data_offset": 0, 00:10:56.887 "data_size": 65536 00:10:56.887 }, 00:10:56.887 { 00:10:56.887 "name": "BaseBdev4", 00:10:56.887 "uuid": "645854e6-02d9-4dee-90e2-9eca353332a6", 00:10:56.887 "is_configured": true, 00:10:56.887 "data_offset": 0, 00:10:56.887 "data_size": 65536 00:10:56.887 } 00:10:56.887 ] 00:10:56.887 }' 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.887 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.147 02:26:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:57.147 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.147 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.147 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.147 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.147 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:57.147 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.147 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:57.147 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.147 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.147 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a68ece2b-8729-4750-b28a-edadfc7e0eba 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.408 [2024-11-28 02:26:30.884759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:57.408 [2024-11-28 02:26:30.884880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:57.408 [2024-11-28 02:26:30.884913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:57.408 
[2024-11-28 02:26:30.885271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:57.408 [2024-11-28 02:26:30.885528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:57.408 [2024-11-28 02:26:30.885578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:57.408 [2024-11-28 02:26:30.885891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.408 NewBaseBdev 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.408 [ 00:10:57.408 { 00:10:57.408 "name": "NewBaseBdev", 00:10:57.408 "aliases": [ 00:10:57.408 "a68ece2b-8729-4750-b28a-edadfc7e0eba" 00:10:57.408 ], 00:10:57.408 "product_name": "Malloc disk", 00:10:57.408 "block_size": 512, 00:10:57.408 "num_blocks": 65536, 00:10:57.408 "uuid": "a68ece2b-8729-4750-b28a-edadfc7e0eba", 00:10:57.408 "assigned_rate_limits": { 00:10:57.408 "rw_ios_per_sec": 0, 00:10:57.408 "rw_mbytes_per_sec": 0, 00:10:57.408 "r_mbytes_per_sec": 0, 00:10:57.408 "w_mbytes_per_sec": 0 00:10:57.408 }, 00:10:57.408 "claimed": true, 00:10:57.408 "claim_type": "exclusive_write", 00:10:57.408 "zoned": false, 00:10:57.408 "supported_io_types": { 00:10:57.408 "read": true, 00:10:57.408 "write": true, 00:10:57.408 "unmap": true, 00:10:57.408 "flush": true, 00:10:57.408 "reset": true, 00:10:57.408 "nvme_admin": false, 00:10:57.408 "nvme_io": false, 00:10:57.408 "nvme_io_md": false, 00:10:57.408 "write_zeroes": true, 00:10:57.408 "zcopy": true, 00:10:57.408 "get_zone_info": false, 00:10:57.408 "zone_management": false, 00:10:57.408 "zone_append": false, 00:10:57.408 "compare": false, 00:10:57.408 "compare_and_write": false, 00:10:57.408 "abort": true, 00:10:57.408 "seek_hole": false, 00:10:57.408 "seek_data": false, 00:10:57.408 "copy": true, 00:10:57.408 "nvme_iov_md": false 00:10:57.408 }, 00:10:57.408 "memory_domains": [ 00:10:57.408 { 00:10:57.408 "dma_device_id": "system", 00:10:57.408 "dma_device_type": 1 00:10:57.408 }, 00:10:57.408 { 00:10:57.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.408 "dma_device_type": 2 00:10:57.408 } 00:10:57.408 ], 00:10:57.408 "driver_specific": {} 00:10:57.408 } 00:10:57.408 ] 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.408 "name": "Existed_Raid", 00:10:57.408 "uuid": "8a925c29-7b72-4d6c-8b45-d8411b04a35d", 00:10:57.408 "strip_size_kb": 0, 00:10:57.408 "state": "online", 00:10:57.408 
"raid_level": "raid1", 00:10:57.408 "superblock": false, 00:10:57.408 "num_base_bdevs": 4, 00:10:57.408 "num_base_bdevs_discovered": 4, 00:10:57.408 "num_base_bdevs_operational": 4, 00:10:57.408 "base_bdevs_list": [ 00:10:57.408 { 00:10:57.408 "name": "NewBaseBdev", 00:10:57.408 "uuid": "a68ece2b-8729-4750-b28a-edadfc7e0eba", 00:10:57.408 "is_configured": true, 00:10:57.408 "data_offset": 0, 00:10:57.408 "data_size": 65536 00:10:57.408 }, 00:10:57.408 { 00:10:57.408 "name": "BaseBdev2", 00:10:57.408 "uuid": "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96", 00:10:57.408 "is_configured": true, 00:10:57.408 "data_offset": 0, 00:10:57.408 "data_size": 65536 00:10:57.408 }, 00:10:57.408 { 00:10:57.408 "name": "BaseBdev3", 00:10:57.408 "uuid": "8e165036-2103-461a-aa3c-137f9dd12800", 00:10:57.408 "is_configured": true, 00:10:57.408 "data_offset": 0, 00:10:57.408 "data_size": 65536 00:10:57.408 }, 00:10:57.408 { 00:10:57.408 "name": "BaseBdev4", 00:10:57.408 "uuid": "645854e6-02d9-4dee-90e2-9eca353332a6", 00:10:57.408 "is_configured": true, 00:10:57.408 "data_offset": 0, 00:10:57.408 "data_size": 65536 00:10:57.408 } 00:10:57.408 ] 00:10:57.408 }' 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.408 02:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.979 [2024-11-28 02:26:31.416269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.979 "name": "Existed_Raid", 00:10:57.979 "aliases": [ 00:10:57.979 "8a925c29-7b72-4d6c-8b45-d8411b04a35d" 00:10:57.979 ], 00:10:57.979 "product_name": "Raid Volume", 00:10:57.979 "block_size": 512, 00:10:57.979 "num_blocks": 65536, 00:10:57.979 "uuid": "8a925c29-7b72-4d6c-8b45-d8411b04a35d", 00:10:57.979 "assigned_rate_limits": { 00:10:57.979 "rw_ios_per_sec": 0, 00:10:57.979 "rw_mbytes_per_sec": 0, 00:10:57.979 "r_mbytes_per_sec": 0, 00:10:57.979 "w_mbytes_per_sec": 0 00:10:57.979 }, 00:10:57.979 "claimed": false, 00:10:57.979 "zoned": false, 00:10:57.979 "supported_io_types": { 00:10:57.979 "read": true, 00:10:57.979 "write": true, 00:10:57.979 "unmap": false, 00:10:57.979 "flush": false, 00:10:57.979 "reset": true, 00:10:57.979 "nvme_admin": false, 00:10:57.979 "nvme_io": false, 00:10:57.979 "nvme_io_md": false, 00:10:57.979 "write_zeroes": true, 00:10:57.979 "zcopy": false, 00:10:57.979 "get_zone_info": false, 00:10:57.979 "zone_management": false, 00:10:57.979 "zone_append": false, 00:10:57.979 "compare": false, 00:10:57.979 "compare_and_write": false, 00:10:57.979 "abort": false, 00:10:57.979 "seek_hole": false, 00:10:57.979 "seek_data": false, 00:10:57.979 
"copy": false, 00:10:57.979 "nvme_iov_md": false 00:10:57.979 }, 00:10:57.979 "memory_domains": [ 00:10:57.979 { 00:10:57.979 "dma_device_id": "system", 00:10:57.979 "dma_device_type": 1 00:10:57.979 }, 00:10:57.979 { 00:10:57.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.979 "dma_device_type": 2 00:10:57.979 }, 00:10:57.979 { 00:10:57.979 "dma_device_id": "system", 00:10:57.979 "dma_device_type": 1 00:10:57.979 }, 00:10:57.979 { 00:10:57.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.979 "dma_device_type": 2 00:10:57.979 }, 00:10:57.979 { 00:10:57.979 "dma_device_id": "system", 00:10:57.979 "dma_device_type": 1 00:10:57.979 }, 00:10:57.979 { 00:10:57.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.979 "dma_device_type": 2 00:10:57.979 }, 00:10:57.979 { 00:10:57.979 "dma_device_id": "system", 00:10:57.979 "dma_device_type": 1 00:10:57.979 }, 00:10:57.979 { 00:10:57.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.979 "dma_device_type": 2 00:10:57.979 } 00:10:57.979 ], 00:10:57.979 "driver_specific": { 00:10:57.979 "raid": { 00:10:57.979 "uuid": "8a925c29-7b72-4d6c-8b45-d8411b04a35d", 00:10:57.979 "strip_size_kb": 0, 00:10:57.979 "state": "online", 00:10:57.979 "raid_level": "raid1", 00:10:57.979 "superblock": false, 00:10:57.979 "num_base_bdevs": 4, 00:10:57.979 "num_base_bdevs_discovered": 4, 00:10:57.979 "num_base_bdevs_operational": 4, 00:10:57.979 "base_bdevs_list": [ 00:10:57.979 { 00:10:57.979 "name": "NewBaseBdev", 00:10:57.979 "uuid": "a68ece2b-8729-4750-b28a-edadfc7e0eba", 00:10:57.979 "is_configured": true, 00:10:57.979 "data_offset": 0, 00:10:57.979 "data_size": 65536 00:10:57.979 }, 00:10:57.979 { 00:10:57.979 "name": "BaseBdev2", 00:10:57.979 "uuid": "7b8ca5c3-692b-47cf-ac33-bc72b62b2b96", 00:10:57.979 "is_configured": true, 00:10:57.979 "data_offset": 0, 00:10:57.979 "data_size": 65536 00:10:57.979 }, 00:10:57.979 { 00:10:57.979 "name": "BaseBdev3", 00:10:57.979 "uuid": "8e165036-2103-461a-aa3c-137f9dd12800", 00:10:57.979 
"is_configured": true, 00:10:57.979 "data_offset": 0, 00:10:57.979 "data_size": 65536 00:10:57.979 }, 00:10:57.979 { 00:10:57.979 "name": "BaseBdev4", 00:10:57.979 "uuid": "645854e6-02d9-4dee-90e2-9eca353332a6", 00:10:57.979 "is_configured": true, 00:10:57.979 "data_offset": 0, 00:10:57.979 "data_size": 65536 00:10:57.979 } 00:10:57.979 ] 00:10:57.979 } 00:10:57.979 } 00:10:57.979 }' 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:57.979 BaseBdev2 00:10:57.979 BaseBdev3 00:10:57.979 BaseBdev4' 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.979 02:26:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.979 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.238 02:26:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.238 [2024-11-28 02:26:31.759373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.238 [2024-11-28 02:26:31.759463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.238 [2024-11-28 02:26:31.759591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.238 [2024-11-28 02:26:31.759973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.238 [2024-11-28 02:26:31.760057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72964 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72964 ']' 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72964 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72964 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72964' 00:10:58.238 killing process with pid 72964 00:10:58.238 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72964 00:10:58.239 [2024-11-28 02:26:31.803333] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.239 02:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72964 00:10:58.808 [2024-11-28 02:26:32.194439] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:59.748 00:10:59.748 real 0m11.536s 00:10:59.748 user 0m18.310s 00:10:59.748 sys 0m2.014s 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.748 ************************************ 00:10:59.748 END TEST raid_state_function_test 00:10:59.748 ************************************ 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:59.748 02:26:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:59.748 02:26:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:59.748 02:26:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.748 02:26:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.748 ************************************ 00:10:59.748 START TEST raid_state_function_test_sb 00:10:59.748 ************************************ 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.748 
02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73635 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73635' 00:10:59.748 Process raid pid: 73635 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73635 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73635 ']' 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.748 02:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.008 [2024-11-28 02:26:33.485534] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:00.008 [2024-11-28 02:26:33.485723] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.008 [2024-11-28 02:26:33.663885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.268 [2024-11-28 02:26:33.776797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.528 [2024-11-28 02:26:33.984289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.528 [2024-11-28 02:26:33.984331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.788 [2024-11-28 02:26:34.322263] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.788 [2024-11-28 02:26:34.322390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.788 [2024-11-28 02:26:34.322407] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.788 [2024-11-28 02:26:34.322421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.788 [2024-11-28 02:26:34.322430] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:00.788 [2024-11-28 02:26:34.322443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.788 [2024-11-28 02:26:34.322452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:00.788 [2024-11-28 02:26:34.322464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.788 02:26:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.788 "name": "Existed_Raid", 00:11:00.788 "uuid": "03ff8c87-99a3-4da2-a809-c1dd2771784f", 00:11:00.788 "strip_size_kb": 0, 00:11:00.788 "state": "configuring", 00:11:00.788 "raid_level": "raid1", 00:11:00.788 "superblock": true, 00:11:00.788 "num_base_bdevs": 4, 00:11:00.788 "num_base_bdevs_discovered": 0, 00:11:00.788 "num_base_bdevs_operational": 4, 00:11:00.788 "base_bdevs_list": [ 00:11:00.788 { 00:11:00.788 "name": "BaseBdev1", 00:11:00.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.788 "is_configured": false, 00:11:00.788 "data_offset": 0, 00:11:00.788 "data_size": 0 00:11:00.788 }, 00:11:00.788 { 00:11:00.788 "name": "BaseBdev2", 00:11:00.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.788 "is_configured": false, 00:11:00.788 "data_offset": 0, 00:11:00.788 "data_size": 0 00:11:00.788 }, 00:11:00.788 { 00:11:00.788 "name": "BaseBdev3", 00:11:00.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.788 "is_configured": false, 00:11:00.788 "data_offset": 0, 00:11:00.788 "data_size": 0 00:11:00.788 }, 00:11:00.788 { 00:11:00.788 "name": "BaseBdev4", 00:11:00.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.788 "is_configured": false, 00:11:00.788 "data_offset": 0, 00:11:00.788 "data_size": 0 00:11:00.788 } 00:11:00.788 ] 00:11:00.788 }' 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.788 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.358 02:26:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.358 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.358 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.358 [2024-11-28 02:26:34.753491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.358 [2024-11-28 02:26:34.753604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:01.358 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.358 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.358 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.358 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.358 [2024-11-28 02:26:34.765455] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.358 [2024-11-28 02:26:34.765505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.358 [2024-11-28 02:26:34.765516] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.358 [2024-11-28 02:26:34.765527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.358 [2024-11-28 02:26:34.765535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:01.358 [2024-11-28 02:26:34.765548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.358 [2024-11-28 02:26:34.765555] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:01.358 [2024-11-28 02:26:34.765578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.358 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.358 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.359 [2024-11-28 02:26:34.814184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.359 BaseBdev1 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.359 [ 00:11:01.359 { 00:11:01.359 "name": "BaseBdev1", 00:11:01.359 "aliases": [ 00:11:01.359 "82ff3230-8c93-4d36-a33c-09e5afac1737" 00:11:01.359 ], 00:11:01.359 "product_name": "Malloc disk", 00:11:01.359 "block_size": 512, 00:11:01.359 "num_blocks": 65536, 00:11:01.359 "uuid": "82ff3230-8c93-4d36-a33c-09e5afac1737", 00:11:01.359 "assigned_rate_limits": { 00:11:01.359 "rw_ios_per_sec": 0, 00:11:01.359 "rw_mbytes_per_sec": 0, 00:11:01.359 "r_mbytes_per_sec": 0, 00:11:01.359 "w_mbytes_per_sec": 0 00:11:01.359 }, 00:11:01.359 "claimed": true, 00:11:01.359 "claim_type": "exclusive_write", 00:11:01.359 "zoned": false, 00:11:01.359 "supported_io_types": { 00:11:01.359 "read": true, 00:11:01.359 "write": true, 00:11:01.359 "unmap": true, 00:11:01.359 "flush": true, 00:11:01.359 "reset": true, 00:11:01.359 "nvme_admin": false, 00:11:01.359 "nvme_io": false, 00:11:01.359 "nvme_io_md": false, 00:11:01.359 "write_zeroes": true, 00:11:01.359 "zcopy": true, 00:11:01.359 "get_zone_info": false, 00:11:01.359 "zone_management": false, 00:11:01.359 "zone_append": false, 00:11:01.359 "compare": false, 00:11:01.359 "compare_and_write": false, 00:11:01.359 "abort": true, 00:11:01.359 "seek_hole": false, 00:11:01.359 "seek_data": false, 00:11:01.359 "copy": true, 00:11:01.359 "nvme_iov_md": false 00:11:01.359 }, 00:11:01.359 "memory_domains": [ 00:11:01.359 { 00:11:01.359 "dma_device_id": "system", 00:11:01.359 "dma_device_type": 1 00:11:01.359 }, 00:11:01.359 { 00:11:01.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.359 "dma_device_type": 2 00:11:01.359 } 00:11:01.359 ], 00:11:01.359 "driver_specific": {} 
00:11:01.359 } 00:11:01.359 ] 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.359 "name": "Existed_Raid", 00:11:01.359 "uuid": "d5210622-5de3-4e79-8ad1-a47ba9882d00", 00:11:01.359 "strip_size_kb": 0, 00:11:01.359 "state": "configuring", 00:11:01.359 "raid_level": "raid1", 00:11:01.359 "superblock": true, 00:11:01.359 "num_base_bdevs": 4, 00:11:01.359 "num_base_bdevs_discovered": 1, 00:11:01.359 "num_base_bdevs_operational": 4, 00:11:01.359 "base_bdevs_list": [ 00:11:01.359 { 00:11:01.359 "name": "BaseBdev1", 00:11:01.359 "uuid": "82ff3230-8c93-4d36-a33c-09e5afac1737", 00:11:01.359 "is_configured": true, 00:11:01.359 "data_offset": 2048, 00:11:01.359 "data_size": 63488 00:11:01.359 }, 00:11:01.359 { 00:11:01.359 "name": "BaseBdev2", 00:11:01.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.359 "is_configured": false, 00:11:01.359 "data_offset": 0, 00:11:01.359 "data_size": 0 00:11:01.359 }, 00:11:01.359 { 00:11:01.359 "name": "BaseBdev3", 00:11:01.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.359 "is_configured": false, 00:11:01.359 "data_offset": 0, 00:11:01.359 "data_size": 0 00:11:01.359 }, 00:11:01.359 { 00:11:01.359 "name": "BaseBdev4", 00:11:01.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.359 "is_configured": false, 00:11:01.359 "data_offset": 0, 00:11:01.359 "data_size": 0 00:11:01.359 } 00:11:01.359 ] 00:11:01.359 }' 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.359 02:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.620 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.620 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.620 02:26:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.620 [2024-11-28 02:26:35.285523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.620 [2024-11-28 02:26:35.285647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:01.620 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.620 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.620 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.620 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.620 [2024-11-28 02:26:35.297549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.881 [2024-11-28 02:26:35.299413] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.881 [2024-11-28 02:26:35.299459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.881 [2024-11-28 02:26:35.299470] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:01.881 [2024-11-28 02:26:35.299482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.881 [2024-11-28 02:26:35.299490] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.881 [2024-11-28 02:26:35.299501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:01.881 02:26:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.881 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.881 "name": 
"Existed_Raid", 00:11:01.881 "uuid": "3df85da0-3d99-4d37-b8e3-3dcd05e84ea7", 00:11:01.881 "strip_size_kb": 0, 00:11:01.881 "state": "configuring", 00:11:01.881 "raid_level": "raid1", 00:11:01.881 "superblock": true, 00:11:01.881 "num_base_bdevs": 4, 00:11:01.881 "num_base_bdevs_discovered": 1, 00:11:01.881 "num_base_bdevs_operational": 4, 00:11:01.881 "base_bdevs_list": [ 00:11:01.881 { 00:11:01.882 "name": "BaseBdev1", 00:11:01.882 "uuid": "82ff3230-8c93-4d36-a33c-09e5afac1737", 00:11:01.882 "is_configured": true, 00:11:01.882 "data_offset": 2048, 00:11:01.882 "data_size": 63488 00:11:01.882 }, 00:11:01.882 { 00:11:01.882 "name": "BaseBdev2", 00:11:01.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.882 "is_configured": false, 00:11:01.882 "data_offset": 0, 00:11:01.882 "data_size": 0 00:11:01.882 }, 00:11:01.882 { 00:11:01.882 "name": "BaseBdev3", 00:11:01.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.882 "is_configured": false, 00:11:01.882 "data_offset": 0, 00:11:01.882 "data_size": 0 00:11:01.882 }, 00:11:01.882 { 00:11:01.882 "name": "BaseBdev4", 00:11:01.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.882 "is_configured": false, 00:11:01.882 "data_offset": 0, 00:11:01.882 "data_size": 0 00:11:01.882 } 00:11:01.882 ] 00:11:01.882 }' 00:11:01.882 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.882 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.142 [2024-11-28 02:26:35.757821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.142 
BaseBdev2 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.142 [ 00:11:02.142 { 00:11:02.142 "name": "BaseBdev2", 00:11:02.142 "aliases": [ 00:11:02.142 "d0e07654-7683-4255-9297-2f0ac8a31cea" 00:11:02.142 ], 00:11:02.142 "product_name": "Malloc disk", 00:11:02.142 "block_size": 512, 00:11:02.142 "num_blocks": 65536, 00:11:02.142 "uuid": "d0e07654-7683-4255-9297-2f0ac8a31cea", 00:11:02.142 "assigned_rate_limits": { 
00:11:02.142 "rw_ios_per_sec": 0, 00:11:02.142 "rw_mbytes_per_sec": 0, 00:11:02.142 "r_mbytes_per_sec": 0, 00:11:02.142 "w_mbytes_per_sec": 0 00:11:02.142 }, 00:11:02.142 "claimed": true, 00:11:02.142 "claim_type": "exclusive_write", 00:11:02.142 "zoned": false, 00:11:02.142 "supported_io_types": { 00:11:02.142 "read": true, 00:11:02.142 "write": true, 00:11:02.142 "unmap": true, 00:11:02.142 "flush": true, 00:11:02.142 "reset": true, 00:11:02.142 "nvme_admin": false, 00:11:02.142 "nvme_io": false, 00:11:02.142 "nvme_io_md": false, 00:11:02.142 "write_zeroes": true, 00:11:02.142 "zcopy": true, 00:11:02.142 "get_zone_info": false, 00:11:02.142 "zone_management": false, 00:11:02.142 "zone_append": false, 00:11:02.142 "compare": false, 00:11:02.142 "compare_and_write": false, 00:11:02.142 "abort": true, 00:11:02.142 "seek_hole": false, 00:11:02.142 "seek_data": false, 00:11:02.142 "copy": true, 00:11:02.142 "nvme_iov_md": false 00:11:02.142 }, 00:11:02.142 "memory_domains": [ 00:11:02.142 { 00:11:02.142 "dma_device_id": "system", 00:11:02.142 "dma_device_type": 1 00:11:02.142 }, 00:11:02.142 { 00:11:02.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.142 "dma_device_type": 2 00:11:02.142 } 00:11:02.142 ], 00:11:02.142 "driver_specific": {} 00:11:02.142 } 00:11:02.142 ] 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.142 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.407 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.407 "name": "Existed_Raid", 00:11:02.407 "uuid": "3df85da0-3d99-4d37-b8e3-3dcd05e84ea7", 00:11:02.407 "strip_size_kb": 0, 00:11:02.407 "state": "configuring", 00:11:02.407 "raid_level": "raid1", 00:11:02.407 "superblock": true, 00:11:02.407 "num_base_bdevs": 4, 00:11:02.407 "num_base_bdevs_discovered": 2, 00:11:02.407 "num_base_bdevs_operational": 4, 00:11:02.407 
"base_bdevs_list": [ 00:11:02.407 { 00:11:02.407 "name": "BaseBdev1", 00:11:02.407 "uuid": "82ff3230-8c93-4d36-a33c-09e5afac1737", 00:11:02.407 "is_configured": true, 00:11:02.407 "data_offset": 2048, 00:11:02.407 "data_size": 63488 00:11:02.407 }, 00:11:02.407 { 00:11:02.407 "name": "BaseBdev2", 00:11:02.407 "uuid": "d0e07654-7683-4255-9297-2f0ac8a31cea", 00:11:02.407 "is_configured": true, 00:11:02.407 "data_offset": 2048, 00:11:02.407 "data_size": 63488 00:11:02.407 }, 00:11:02.407 { 00:11:02.407 "name": "BaseBdev3", 00:11:02.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.407 "is_configured": false, 00:11:02.407 "data_offset": 0, 00:11:02.407 "data_size": 0 00:11:02.407 }, 00:11:02.407 { 00:11:02.407 "name": "BaseBdev4", 00:11:02.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.407 "is_configured": false, 00:11:02.407 "data_offset": 0, 00:11:02.407 "data_size": 0 00:11:02.407 } 00:11:02.407 ] 00:11:02.407 }' 00:11:02.407 02:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.407 02:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.666 [2024-11-28 02:26:36.319834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.666 BaseBdev3 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:02.666 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.667 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.926 [ 00:11:02.926 { 00:11:02.926 "name": "BaseBdev3", 00:11:02.926 "aliases": [ 00:11:02.926 "f831ea46-acc6-4c73-b183-c13571832c42" 00:11:02.926 ], 00:11:02.926 "product_name": "Malloc disk", 00:11:02.926 "block_size": 512, 00:11:02.926 "num_blocks": 65536, 00:11:02.926 "uuid": "f831ea46-acc6-4c73-b183-c13571832c42", 00:11:02.926 "assigned_rate_limits": { 00:11:02.926 "rw_ios_per_sec": 0, 00:11:02.926 "rw_mbytes_per_sec": 0, 00:11:02.926 "r_mbytes_per_sec": 0, 00:11:02.926 "w_mbytes_per_sec": 0 00:11:02.926 }, 00:11:02.926 "claimed": true, 00:11:02.926 "claim_type": "exclusive_write", 00:11:02.926 "zoned": false, 00:11:02.926 "supported_io_types": { 00:11:02.926 "read": true, 00:11:02.926 
"write": true, 00:11:02.926 "unmap": true, 00:11:02.926 "flush": true, 00:11:02.926 "reset": true, 00:11:02.926 "nvme_admin": false, 00:11:02.926 "nvme_io": false, 00:11:02.926 "nvme_io_md": false, 00:11:02.926 "write_zeroes": true, 00:11:02.926 "zcopy": true, 00:11:02.926 "get_zone_info": false, 00:11:02.926 "zone_management": false, 00:11:02.926 "zone_append": false, 00:11:02.926 "compare": false, 00:11:02.926 "compare_and_write": false, 00:11:02.926 "abort": true, 00:11:02.926 "seek_hole": false, 00:11:02.926 "seek_data": false, 00:11:02.926 "copy": true, 00:11:02.926 "nvme_iov_md": false 00:11:02.926 }, 00:11:02.926 "memory_domains": [ 00:11:02.926 { 00:11:02.926 "dma_device_id": "system", 00:11:02.926 "dma_device_type": 1 00:11:02.926 }, 00:11:02.926 { 00:11:02.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.926 "dma_device_type": 2 00:11:02.926 } 00:11:02.926 ], 00:11:02.926 "driver_specific": {} 00:11:02.926 } 00:11:02.926 ] 00:11:02.926 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.926 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.926 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.926 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.926 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.927 "name": "Existed_Raid", 00:11:02.927 "uuid": "3df85da0-3d99-4d37-b8e3-3dcd05e84ea7", 00:11:02.927 "strip_size_kb": 0, 00:11:02.927 "state": "configuring", 00:11:02.927 "raid_level": "raid1", 00:11:02.927 "superblock": true, 00:11:02.927 "num_base_bdevs": 4, 00:11:02.927 "num_base_bdevs_discovered": 3, 00:11:02.927 "num_base_bdevs_operational": 4, 00:11:02.927 "base_bdevs_list": [ 00:11:02.927 { 00:11:02.927 "name": "BaseBdev1", 00:11:02.927 "uuid": "82ff3230-8c93-4d36-a33c-09e5afac1737", 00:11:02.927 "is_configured": true, 00:11:02.927 "data_offset": 2048, 00:11:02.927 "data_size": 63488 00:11:02.927 }, 00:11:02.927 { 00:11:02.927 "name": "BaseBdev2", 00:11:02.927 "uuid": 
"d0e07654-7683-4255-9297-2f0ac8a31cea", 00:11:02.927 "is_configured": true, 00:11:02.927 "data_offset": 2048, 00:11:02.927 "data_size": 63488 00:11:02.927 }, 00:11:02.927 { 00:11:02.927 "name": "BaseBdev3", 00:11:02.927 "uuid": "f831ea46-acc6-4c73-b183-c13571832c42", 00:11:02.927 "is_configured": true, 00:11:02.927 "data_offset": 2048, 00:11:02.927 "data_size": 63488 00:11:02.927 }, 00:11:02.927 { 00:11:02.927 "name": "BaseBdev4", 00:11:02.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.927 "is_configured": false, 00:11:02.927 "data_offset": 0, 00:11:02.927 "data_size": 0 00:11:02.927 } 00:11:02.927 ] 00:11:02.927 }' 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.927 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.186 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:03.186 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.186 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.447 [2024-11-28 02:26:36.885188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.447 [2024-11-28 02:26:36.885477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:03.447 [2024-11-28 02:26:36.885495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:03.447 [2024-11-28 02:26:36.885778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:03.447 BaseBdev4 00:11:03.447 [2024-11-28 02:26:36.885981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:03.447 [2024-11-28 02:26:36.885999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:03.447 [2024-11-28 02:26:36.886152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.447 [ 00:11:03.447 { 00:11:03.447 "name": "BaseBdev4", 00:11:03.447 "aliases": [ 00:11:03.447 "201f99e8-39d7-441e-88b8-88423124002a" 00:11:03.447 ], 00:11:03.447 "product_name": "Malloc disk", 00:11:03.447 "block_size": 512, 00:11:03.447 
"num_blocks": 65536, 00:11:03.447 "uuid": "201f99e8-39d7-441e-88b8-88423124002a", 00:11:03.447 "assigned_rate_limits": { 00:11:03.447 "rw_ios_per_sec": 0, 00:11:03.447 "rw_mbytes_per_sec": 0, 00:11:03.447 "r_mbytes_per_sec": 0, 00:11:03.447 "w_mbytes_per_sec": 0 00:11:03.447 }, 00:11:03.447 "claimed": true, 00:11:03.447 "claim_type": "exclusive_write", 00:11:03.447 "zoned": false, 00:11:03.447 "supported_io_types": { 00:11:03.447 "read": true, 00:11:03.447 "write": true, 00:11:03.447 "unmap": true, 00:11:03.447 "flush": true, 00:11:03.447 "reset": true, 00:11:03.447 "nvme_admin": false, 00:11:03.447 "nvme_io": false, 00:11:03.447 "nvme_io_md": false, 00:11:03.447 "write_zeroes": true, 00:11:03.447 "zcopy": true, 00:11:03.447 "get_zone_info": false, 00:11:03.447 "zone_management": false, 00:11:03.447 "zone_append": false, 00:11:03.447 "compare": false, 00:11:03.447 "compare_and_write": false, 00:11:03.447 "abort": true, 00:11:03.447 "seek_hole": false, 00:11:03.447 "seek_data": false, 00:11:03.447 "copy": true, 00:11:03.447 "nvme_iov_md": false 00:11:03.447 }, 00:11:03.447 "memory_domains": [ 00:11:03.447 { 00:11:03.447 "dma_device_id": "system", 00:11:03.447 "dma_device_type": 1 00:11:03.447 }, 00:11:03.447 { 00:11:03.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.447 "dma_device_type": 2 00:11:03.447 } 00:11:03.447 ], 00:11:03.447 "driver_specific": {} 00:11:03.447 } 00:11:03.447 ] 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.447 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.448 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.448 "name": "Existed_Raid", 00:11:03.448 "uuid": "3df85da0-3d99-4d37-b8e3-3dcd05e84ea7", 00:11:03.448 "strip_size_kb": 0, 00:11:03.448 "state": "online", 00:11:03.448 "raid_level": "raid1", 00:11:03.448 "superblock": true, 00:11:03.448 "num_base_bdevs": 4, 
00:11:03.448 "num_base_bdevs_discovered": 4, 00:11:03.448 "num_base_bdevs_operational": 4, 00:11:03.448 "base_bdevs_list": [ 00:11:03.448 { 00:11:03.448 "name": "BaseBdev1", 00:11:03.448 "uuid": "82ff3230-8c93-4d36-a33c-09e5afac1737", 00:11:03.448 "is_configured": true, 00:11:03.448 "data_offset": 2048, 00:11:03.448 "data_size": 63488 00:11:03.448 }, 00:11:03.448 { 00:11:03.448 "name": "BaseBdev2", 00:11:03.448 "uuid": "d0e07654-7683-4255-9297-2f0ac8a31cea", 00:11:03.448 "is_configured": true, 00:11:03.448 "data_offset": 2048, 00:11:03.448 "data_size": 63488 00:11:03.448 }, 00:11:03.448 { 00:11:03.448 "name": "BaseBdev3", 00:11:03.448 "uuid": "f831ea46-acc6-4c73-b183-c13571832c42", 00:11:03.448 "is_configured": true, 00:11:03.448 "data_offset": 2048, 00:11:03.448 "data_size": 63488 00:11:03.448 }, 00:11:03.448 { 00:11:03.448 "name": "BaseBdev4", 00:11:03.448 "uuid": "201f99e8-39d7-441e-88b8-88423124002a", 00:11:03.448 "is_configured": true, 00:11:03.448 "data_offset": 2048, 00:11:03.448 "data_size": 63488 00:11:03.448 } 00:11:03.448 ] 00:11:03.448 }' 00:11:03.448 02:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.448 02:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.708 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.708 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.708 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.708 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.708 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.708 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.708 
02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.708 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.708 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.708 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.708 [2024-11-28 02:26:37.360784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.708 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.969 "name": "Existed_Raid", 00:11:03.969 "aliases": [ 00:11:03.969 "3df85da0-3d99-4d37-b8e3-3dcd05e84ea7" 00:11:03.969 ], 00:11:03.969 "product_name": "Raid Volume", 00:11:03.969 "block_size": 512, 00:11:03.969 "num_blocks": 63488, 00:11:03.969 "uuid": "3df85da0-3d99-4d37-b8e3-3dcd05e84ea7", 00:11:03.969 "assigned_rate_limits": { 00:11:03.969 "rw_ios_per_sec": 0, 00:11:03.969 "rw_mbytes_per_sec": 0, 00:11:03.969 "r_mbytes_per_sec": 0, 00:11:03.969 "w_mbytes_per_sec": 0 00:11:03.969 }, 00:11:03.969 "claimed": false, 00:11:03.969 "zoned": false, 00:11:03.969 "supported_io_types": { 00:11:03.969 "read": true, 00:11:03.969 "write": true, 00:11:03.969 "unmap": false, 00:11:03.969 "flush": false, 00:11:03.969 "reset": true, 00:11:03.969 "nvme_admin": false, 00:11:03.969 "nvme_io": false, 00:11:03.969 "nvme_io_md": false, 00:11:03.969 "write_zeroes": true, 00:11:03.969 "zcopy": false, 00:11:03.969 "get_zone_info": false, 00:11:03.969 "zone_management": false, 00:11:03.969 "zone_append": false, 00:11:03.969 "compare": false, 00:11:03.969 "compare_and_write": false, 00:11:03.969 "abort": false, 00:11:03.969 "seek_hole": false, 00:11:03.969 "seek_data": false, 00:11:03.969 "copy": false, 00:11:03.969 
"nvme_iov_md": false 00:11:03.969 }, 00:11:03.969 "memory_domains": [ 00:11:03.969 { 00:11:03.969 "dma_device_id": "system", 00:11:03.969 "dma_device_type": 1 00:11:03.969 }, 00:11:03.969 { 00:11:03.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.969 "dma_device_type": 2 00:11:03.969 }, 00:11:03.969 { 00:11:03.969 "dma_device_id": "system", 00:11:03.969 "dma_device_type": 1 00:11:03.969 }, 00:11:03.969 { 00:11:03.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.969 "dma_device_type": 2 00:11:03.969 }, 00:11:03.969 { 00:11:03.969 "dma_device_id": "system", 00:11:03.969 "dma_device_type": 1 00:11:03.969 }, 00:11:03.969 { 00:11:03.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.969 "dma_device_type": 2 00:11:03.969 }, 00:11:03.969 { 00:11:03.969 "dma_device_id": "system", 00:11:03.969 "dma_device_type": 1 00:11:03.969 }, 00:11:03.969 { 00:11:03.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.969 "dma_device_type": 2 00:11:03.969 } 00:11:03.969 ], 00:11:03.969 "driver_specific": { 00:11:03.969 "raid": { 00:11:03.969 "uuid": "3df85da0-3d99-4d37-b8e3-3dcd05e84ea7", 00:11:03.969 "strip_size_kb": 0, 00:11:03.969 "state": "online", 00:11:03.969 "raid_level": "raid1", 00:11:03.969 "superblock": true, 00:11:03.969 "num_base_bdevs": 4, 00:11:03.969 "num_base_bdevs_discovered": 4, 00:11:03.969 "num_base_bdevs_operational": 4, 00:11:03.969 "base_bdevs_list": [ 00:11:03.969 { 00:11:03.969 "name": "BaseBdev1", 00:11:03.969 "uuid": "82ff3230-8c93-4d36-a33c-09e5afac1737", 00:11:03.969 "is_configured": true, 00:11:03.969 "data_offset": 2048, 00:11:03.969 "data_size": 63488 00:11:03.969 }, 00:11:03.969 { 00:11:03.969 "name": "BaseBdev2", 00:11:03.969 "uuid": "d0e07654-7683-4255-9297-2f0ac8a31cea", 00:11:03.969 "is_configured": true, 00:11:03.969 "data_offset": 2048, 00:11:03.969 "data_size": 63488 00:11:03.969 }, 00:11:03.969 { 00:11:03.969 "name": "BaseBdev3", 00:11:03.969 "uuid": "f831ea46-acc6-4c73-b183-c13571832c42", 00:11:03.969 "is_configured": true, 
00:11:03.969 "data_offset": 2048, 00:11:03.969 "data_size": 63488 00:11:03.969 }, 00:11:03.969 { 00:11:03.969 "name": "BaseBdev4", 00:11:03.969 "uuid": "201f99e8-39d7-441e-88b8-88423124002a", 00:11:03.969 "is_configured": true, 00:11:03.969 "data_offset": 2048, 00:11:03.969 "data_size": 63488 00:11:03.969 } 00:11:03.969 ] 00:11:03.969 } 00:11:03.969 } 00:11:03.969 }' 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:03.969 BaseBdev2 00:11:03.969 BaseBdev3 00:11:03.969 BaseBdev4' 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.969 02:26:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.969 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.970 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:03.970 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.970 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.970 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.970 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.970 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.970 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.970 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:03.970 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.230 [2024-11-28 02:26:37.680028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:04.230 02:26:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.230 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.231 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.231 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.231 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.231 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.231 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.231 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.231 "name": "Existed_Raid", 00:11:04.231 "uuid": "3df85da0-3d99-4d37-b8e3-3dcd05e84ea7", 00:11:04.231 "strip_size_kb": 0, 00:11:04.231 
"state": "online", 00:11:04.231 "raid_level": "raid1", 00:11:04.231 "superblock": true, 00:11:04.231 "num_base_bdevs": 4, 00:11:04.231 "num_base_bdevs_discovered": 3, 00:11:04.231 "num_base_bdevs_operational": 3, 00:11:04.231 "base_bdevs_list": [ 00:11:04.231 { 00:11:04.231 "name": null, 00:11:04.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.231 "is_configured": false, 00:11:04.231 "data_offset": 0, 00:11:04.231 "data_size": 63488 00:11:04.231 }, 00:11:04.231 { 00:11:04.231 "name": "BaseBdev2", 00:11:04.231 "uuid": "d0e07654-7683-4255-9297-2f0ac8a31cea", 00:11:04.231 "is_configured": true, 00:11:04.231 "data_offset": 2048, 00:11:04.231 "data_size": 63488 00:11:04.231 }, 00:11:04.231 { 00:11:04.231 "name": "BaseBdev3", 00:11:04.231 "uuid": "f831ea46-acc6-4c73-b183-c13571832c42", 00:11:04.231 "is_configured": true, 00:11:04.231 "data_offset": 2048, 00:11:04.231 "data_size": 63488 00:11:04.231 }, 00:11:04.231 { 00:11:04.231 "name": "BaseBdev4", 00:11:04.231 "uuid": "201f99e8-39d7-441e-88b8-88423124002a", 00:11:04.231 "is_configured": true, 00:11:04.231 "data_offset": 2048, 00:11:04.231 "data_size": 63488 00:11:04.231 } 00:11:04.231 ] 00:11:04.231 }' 00:11:04.231 02:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.231 02:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.802 02:26:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.802 [2024-11-28 02:26:38.301535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.802 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.802 [2024-11-28 02:26:38.455915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:05.063 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.063 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.064 [2024-11-28 02:26:38.590661] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:05.064 [2024-11-28 02:26:38.590774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.064 [2024-11-28 02:26:38.684581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.064 [2024-11-28 02:26:38.684636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.064 [2024-11-28 02:26:38.684650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.064 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.326 BaseBdev2 00:11:05.326 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.326 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:05.326 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:05.326 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.326 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.326 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:05.327 [ 00:11:05.327 { 00:11:05.327 "name": "BaseBdev2", 00:11:05.327 "aliases": [ 00:11:05.327 "8e37b700-03cc-4a6e-aa67-8ae47a7e6641" 00:11:05.327 ], 00:11:05.327 "product_name": "Malloc disk", 00:11:05.327 "block_size": 512, 00:11:05.327 "num_blocks": 65536, 00:11:05.327 "uuid": "8e37b700-03cc-4a6e-aa67-8ae47a7e6641", 00:11:05.327 "assigned_rate_limits": { 00:11:05.327 "rw_ios_per_sec": 0, 00:11:05.327 "rw_mbytes_per_sec": 0, 00:11:05.327 "r_mbytes_per_sec": 0, 00:11:05.327 "w_mbytes_per_sec": 0 00:11:05.327 }, 00:11:05.327 "claimed": false, 00:11:05.327 "zoned": false, 00:11:05.327 "supported_io_types": { 00:11:05.327 "read": true, 00:11:05.327 "write": true, 00:11:05.327 "unmap": true, 00:11:05.327 "flush": true, 00:11:05.327 "reset": true, 00:11:05.327 "nvme_admin": false, 00:11:05.327 "nvme_io": false, 00:11:05.327 "nvme_io_md": false, 00:11:05.327 "write_zeroes": true, 00:11:05.327 "zcopy": true, 00:11:05.327 "get_zone_info": false, 00:11:05.327 "zone_management": false, 00:11:05.327 "zone_append": false, 00:11:05.327 "compare": false, 00:11:05.327 "compare_and_write": false, 00:11:05.327 "abort": true, 00:11:05.327 "seek_hole": false, 00:11:05.327 "seek_data": false, 00:11:05.327 "copy": true, 00:11:05.327 "nvme_iov_md": false 00:11:05.327 }, 00:11:05.327 "memory_domains": [ 00:11:05.327 { 00:11:05.327 "dma_device_id": "system", 00:11:05.327 "dma_device_type": 1 00:11:05.327 }, 00:11:05.327 { 00:11:05.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.327 "dma_device_type": 2 00:11:05.327 } 00:11:05.327 ], 00:11:05.327 "driver_specific": {} 00:11:05.327 } 00:11:05.327 ] 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.327 02:26:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.327 BaseBdev3 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:05.327 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.327 02:26:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.327 [ 00:11:05.327 { 00:11:05.327 "name": "BaseBdev3", 00:11:05.327 "aliases": [ 00:11:05.327 "16543f03-593a-4513-ae2a-3bd481ece85c" 00:11:05.327 ], 00:11:05.327 "product_name": "Malloc disk", 00:11:05.327 "block_size": 512, 00:11:05.327 "num_blocks": 65536, 00:11:05.327 "uuid": "16543f03-593a-4513-ae2a-3bd481ece85c", 00:11:05.327 "assigned_rate_limits": { 00:11:05.327 "rw_ios_per_sec": 0, 00:11:05.327 "rw_mbytes_per_sec": 0, 00:11:05.327 "r_mbytes_per_sec": 0, 00:11:05.327 "w_mbytes_per_sec": 0 00:11:05.327 }, 00:11:05.327 "claimed": false, 00:11:05.327 "zoned": false, 00:11:05.327 "supported_io_types": { 00:11:05.327 "read": true, 00:11:05.327 "write": true, 00:11:05.327 "unmap": true, 00:11:05.327 "flush": true, 00:11:05.327 "reset": true, 00:11:05.327 "nvme_admin": false, 00:11:05.327 "nvme_io": false, 00:11:05.327 "nvme_io_md": false, 00:11:05.327 "write_zeroes": true, 00:11:05.327 "zcopy": true, 00:11:05.327 "get_zone_info": false, 00:11:05.328 "zone_management": false, 00:11:05.328 "zone_append": false, 00:11:05.328 "compare": false, 00:11:05.328 "compare_and_write": false, 00:11:05.328 "abort": true, 00:11:05.328 "seek_hole": false, 00:11:05.328 "seek_data": false, 00:11:05.328 "copy": true, 00:11:05.328 "nvme_iov_md": false 00:11:05.328 }, 00:11:05.328 "memory_domains": [ 00:11:05.328 { 00:11:05.328 "dma_device_id": "system", 00:11:05.328 "dma_device_type": 1 00:11:05.328 }, 00:11:05.328 { 00:11:05.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.328 "dma_device_type": 2 00:11:05.328 } 00:11:05.328 ], 00:11:05.328 "driver_specific": {} 00:11:05.328 } 00:11:05.328 ] 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.328 BaseBdev4 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.328 [ 00:11:05.328 { 00:11:05.328 "name": "BaseBdev4", 00:11:05.328 "aliases": [ 00:11:05.328 "e7ca592d-1ec4-44a6-b9e5-27001cecad34" 00:11:05.328 ], 00:11:05.328 "product_name": "Malloc disk", 00:11:05.328 "block_size": 512, 00:11:05.328 "num_blocks": 65536, 00:11:05.328 "uuid": "e7ca592d-1ec4-44a6-b9e5-27001cecad34", 00:11:05.328 "assigned_rate_limits": { 00:11:05.328 "rw_ios_per_sec": 0, 00:11:05.328 "rw_mbytes_per_sec": 0, 00:11:05.328 "r_mbytes_per_sec": 0, 00:11:05.328 "w_mbytes_per_sec": 0 00:11:05.328 }, 00:11:05.328 "claimed": false, 00:11:05.328 "zoned": false, 00:11:05.328 "supported_io_types": { 00:11:05.328 "read": true, 00:11:05.328 "write": true, 00:11:05.328 "unmap": true, 00:11:05.328 "flush": true, 00:11:05.328 "reset": true, 00:11:05.328 "nvme_admin": false, 00:11:05.328 "nvme_io": false, 00:11:05.328 "nvme_io_md": false, 00:11:05.328 "write_zeroes": true, 00:11:05.328 "zcopy": true, 00:11:05.328 "get_zone_info": false, 00:11:05.328 "zone_management": false, 00:11:05.328 "zone_append": false, 00:11:05.328 "compare": false, 00:11:05.328 "compare_and_write": false, 00:11:05.328 "abort": true, 00:11:05.328 "seek_hole": false, 00:11:05.328 "seek_data": false, 00:11:05.328 "copy": true, 00:11:05.328 "nvme_iov_md": false 00:11:05.328 }, 00:11:05.328 "memory_domains": [ 00:11:05.328 { 00:11:05.328 "dma_device_id": "system", 00:11:05.328 "dma_device_type": 1 00:11:05.328 }, 00:11:05.328 { 00:11:05.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.328 "dma_device_type": 2 00:11:05.328 } 00:11:05.328 ], 00:11:05.328 "driver_specific": {} 00:11:05.328 } 00:11:05.328 ] 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.328 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.328 [2024-11-28 02:26:38.984732] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.328 [2024-11-28 02:26:38.984786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.328 [2024-11-28 02:26:38.984809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.328 [2024-11-28 02:26:38.986704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.329 [2024-11-28 02:26:38.986820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.329 02:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.590 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.590 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.590 "name": "Existed_Raid", 00:11:05.590 "uuid": "5f4fa353-5242-45c4-84b8-027a97067262", 00:11:05.590 "strip_size_kb": 0, 00:11:05.590 "state": "configuring", 00:11:05.590 "raid_level": "raid1", 00:11:05.590 "superblock": true, 00:11:05.590 "num_base_bdevs": 4, 00:11:05.590 "num_base_bdevs_discovered": 3, 00:11:05.590 "num_base_bdevs_operational": 4, 00:11:05.590 "base_bdevs_list": [ 00:11:05.590 { 00:11:05.590 "name": "BaseBdev1", 00:11:05.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.590 "is_configured": false, 00:11:05.590 "data_offset": 0, 00:11:05.590 "data_size": 0 00:11:05.590 }, 00:11:05.590 { 00:11:05.590 "name": "BaseBdev2", 00:11:05.590 "uuid": "8e37b700-03cc-4a6e-aa67-8ae47a7e6641", 
00:11:05.590 "is_configured": true, 00:11:05.590 "data_offset": 2048, 00:11:05.590 "data_size": 63488 00:11:05.590 }, 00:11:05.590 { 00:11:05.590 "name": "BaseBdev3", 00:11:05.590 "uuid": "16543f03-593a-4513-ae2a-3bd481ece85c", 00:11:05.590 "is_configured": true, 00:11:05.590 "data_offset": 2048, 00:11:05.590 "data_size": 63488 00:11:05.590 }, 00:11:05.590 { 00:11:05.590 "name": "BaseBdev4", 00:11:05.590 "uuid": "e7ca592d-1ec4-44a6-b9e5-27001cecad34", 00:11:05.590 "is_configured": true, 00:11:05.590 "data_offset": 2048, 00:11:05.590 "data_size": 63488 00:11:05.590 } 00:11:05.590 ] 00:11:05.590 }' 00:11:05.590 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.590 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.850 [2024-11-28 02:26:39.448009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.850 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.850 "name": "Existed_Raid", 00:11:05.850 "uuid": "5f4fa353-5242-45c4-84b8-027a97067262", 00:11:05.850 "strip_size_kb": 0, 00:11:05.850 "state": "configuring", 00:11:05.851 "raid_level": "raid1", 00:11:05.851 "superblock": true, 00:11:05.851 "num_base_bdevs": 4, 00:11:05.851 "num_base_bdevs_discovered": 2, 00:11:05.851 "num_base_bdevs_operational": 4, 00:11:05.851 "base_bdevs_list": [ 00:11:05.851 { 00:11:05.851 "name": "BaseBdev1", 00:11:05.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.851 "is_configured": false, 00:11:05.851 "data_offset": 0, 00:11:05.851 "data_size": 0 00:11:05.851 }, 00:11:05.851 { 00:11:05.851 "name": null, 00:11:05.851 "uuid": "8e37b700-03cc-4a6e-aa67-8ae47a7e6641", 00:11:05.851 
"is_configured": false, 00:11:05.851 "data_offset": 0, 00:11:05.851 "data_size": 63488 00:11:05.851 }, 00:11:05.851 { 00:11:05.851 "name": "BaseBdev3", 00:11:05.851 "uuid": "16543f03-593a-4513-ae2a-3bd481ece85c", 00:11:05.851 "is_configured": true, 00:11:05.851 "data_offset": 2048, 00:11:05.851 "data_size": 63488 00:11:05.851 }, 00:11:05.851 { 00:11:05.851 "name": "BaseBdev4", 00:11:05.851 "uuid": "e7ca592d-1ec4-44a6-b9e5-27001cecad34", 00:11:05.851 "is_configured": true, 00:11:05.851 "data_offset": 2048, 00:11:05.851 "data_size": 63488 00:11:05.851 } 00:11:05.851 ] 00:11:05.851 }' 00:11:05.851 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.851 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 [2024-11-28 02:26:39.957230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.422 BaseBdev1 
00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 [ 00:11:06.422 { 00:11:06.422 "name": "BaseBdev1", 00:11:06.422 "aliases": [ 00:11:06.422 "4ac3d770-1741-450c-909a-ef498491c440" 00:11:06.422 ], 00:11:06.422 "product_name": "Malloc disk", 00:11:06.422 "block_size": 512, 00:11:06.422 "num_blocks": 65536, 00:11:06.422 "uuid": "4ac3d770-1741-450c-909a-ef498491c440", 00:11:06.422 "assigned_rate_limits": { 00:11:06.422 
"rw_ios_per_sec": 0, 00:11:06.422 "rw_mbytes_per_sec": 0, 00:11:06.422 "r_mbytes_per_sec": 0, 00:11:06.422 "w_mbytes_per_sec": 0 00:11:06.422 }, 00:11:06.422 "claimed": true, 00:11:06.422 "claim_type": "exclusive_write", 00:11:06.422 "zoned": false, 00:11:06.422 "supported_io_types": { 00:11:06.422 "read": true, 00:11:06.422 "write": true, 00:11:06.422 "unmap": true, 00:11:06.422 "flush": true, 00:11:06.422 "reset": true, 00:11:06.422 "nvme_admin": false, 00:11:06.422 "nvme_io": false, 00:11:06.422 "nvme_io_md": false, 00:11:06.422 "write_zeroes": true, 00:11:06.422 "zcopy": true, 00:11:06.422 "get_zone_info": false, 00:11:06.422 "zone_management": false, 00:11:06.422 "zone_append": false, 00:11:06.422 "compare": false, 00:11:06.422 "compare_and_write": false, 00:11:06.422 "abort": true, 00:11:06.422 "seek_hole": false, 00:11:06.422 "seek_data": false, 00:11:06.422 "copy": true, 00:11:06.422 "nvme_iov_md": false 00:11:06.422 }, 00:11:06.422 "memory_domains": [ 00:11:06.422 { 00:11:06.422 "dma_device_id": "system", 00:11:06.422 "dma_device_type": 1 00:11:06.422 }, 00:11:06.422 { 00:11:06.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.422 "dma_device_type": 2 00:11:06.422 } 00:11:06.422 ], 00:11:06.422 "driver_specific": {} 00:11:06.422 } 00:11:06.422 ] 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 02:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.422 "name": "Existed_Raid", 00:11:06.422 "uuid": "5f4fa353-5242-45c4-84b8-027a97067262", 00:11:06.422 "strip_size_kb": 0, 00:11:06.422 "state": "configuring", 00:11:06.422 "raid_level": "raid1", 00:11:06.422 "superblock": true, 00:11:06.422 "num_base_bdevs": 4, 00:11:06.422 "num_base_bdevs_discovered": 3, 00:11:06.422 "num_base_bdevs_operational": 4, 00:11:06.422 "base_bdevs_list": [ 00:11:06.422 { 00:11:06.422 "name": "BaseBdev1", 00:11:06.422 "uuid": "4ac3d770-1741-450c-909a-ef498491c440", 00:11:06.422 "is_configured": true, 00:11:06.422 "data_offset": 2048, 00:11:06.422 "data_size": 63488 
00:11:06.422 }, 00:11:06.422 { 00:11:06.422 "name": null, 00:11:06.423 "uuid": "8e37b700-03cc-4a6e-aa67-8ae47a7e6641", 00:11:06.423 "is_configured": false, 00:11:06.423 "data_offset": 0, 00:11:06.423 "data_size": 63488 00:11:06.423 }, 00:11:06.423 { 00:11:06.423 "name": "BaseBdev3", 00:11:06.423 "uuid": "16543f03-593a-4513-ae2a-3bd481ece85c", 00:11:06.423 "is_configured": true, 00:11:06.423 "data_offset": 2048, 00:11:06.423 "data_size": 63488 00:11:06.423 }, 00:11:06.423 { 00:11:06.423 "name": "BaseBdev4", 00:11:06.423 "uuid": "e7ca592d-1ec4-44a6-b9e5-27001cecad34", 00:11:06.423 "is_configured": true, 00:11:06.423 "data_offset": 2048, 00:11:06.423 "data_size": 63488 00:11:06.423 } 00:11:06.423 ] 00:11:06.423 }' 00:11:06.423 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.423 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.996 
[2024-11-28 02:26:40.416623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.996 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.996 02:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.996 "name": "Existed_Raid", 00:11:06.996 "uuid": "5f4fa353-5242-45c4-84b8-027a97067262", 00:11:06.996 "strip_size_kb": 0, 00:11:06.996 "state": "configuring", 00:11:06.996 "raid_level": "raid1", 00:11:06.996 "superblock": true, 00:11:06.996 "num_base_bdevs": 4, 00:11:06.996 "num_base_bdevs_discovered": 2, 00:11:06.996 "num_base_bdevs_operational": 4, 00:11:06.996 "base_bdevs_list": [ 00:11:06.996 { 00:11:06.996 "name": "BaseBdev1", 00:11:06.996 "uuid": "4ac3d770-1741-450c-909a-ef498491c440", 00:11:06.996 "is_configured": true, 00:11:06.996 "data_offset": 2048, 00:11:06.996 "data_size": 63488 00:11:06.996 }, 00:11:06.996 { 00:11:06.996 "name": null, 00:11:06.996 "uuid": "8e37b700-03cc-4a6e-aa67-8ae47a7e6641", 00:11:06.996 "is_configured": false, 00:11:06.996 "data_offset": 0, 00:11:06.996 "data_size": 63488 00:11:06.996 }, 00:11:06.996 { 00:11:06.996 "name": null, 00:11:06.996 "uuid": "16543f03-593a-4513-ae2a-3bd481ece85c", 00:11:06.996 "is_configured": false, 00:11:06.996 "data_offset": 0, 00:11:06.996 "data_size": 63488 00:11:06.996 }, 00:11:06.996 { 00:11:06.996 "name": "BaseBdev4", 00:11:06.996 "uuid": "e7ca592d-1ec4-44a6-b9e5-27001cecad34", 00:11:06.996 "is_configured": true, 00:11:06.996 "data_offset": 2048, 00:11:06.996 "data_size": 63488 00:11:06.996 } 00:11:06.996 ] 00:11:06.996 }' 00:11:06.997 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.997 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.257 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.257 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.257 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.257 02:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.257 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.257 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:07.257 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.258 [2024-11-28 02:26:40.923803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.258 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.519 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.519 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.519 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.519 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.519 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.519 "name": "Existed_Raid", 00:11:07.519 "uuid": "5f4fa353-5242-45c4-84b8-027a97067262", 00:11:07.519 "strip_size_kb": 0, 00:11:07.519 "state": "configuring", 00:11:07.519 "raid_level": "raid1", 00:11:07.519 "superblock": true, 00:11:07.519 "num_base_bdevs": 4, 00:11:07.519 "num_base_bdevs_discovered": 3, 00:11:07.519 "num_base_bdevs_operational": 4, 00:11:07.519 "base_bdevs_list": [ 00:11:07.519 { 00:11:07.519 "name": "BaseBdev1", 00:11:07.519 "uuid": "4ac3d770-1741-450c-909a-ef498491c440", 00:11:07.519 "is_configured": true, 00:11:07.519 "data_offset": 2048, 00:11:07.519 "data_size": 63488 00:11:07.519 }, 00:11:07.519 { 00:11:07.519 "name": null, 00:11:07.519 "uuid": "8e37b700-03cc-4a6e-aa67-8ae47a7e6641", 00:11:07.519 "is_configured": false, 00:11:07.519 "data_offset": 0, 00:11:07.519 "data_size": 63488 00:11:07.519 }, 00:11:07.519 { 00:11:07.519 "name": "BaseBdev3", 00:11:07.519 "uuid": "16543f03-593a-4513-ae2a-3bd481ece85c", 00:11:07.519 "is_configured": true, 00:11:07.519 "data_offset": 2048, 00:11:07.519 "data_size": 63488 00:11:07.519 }, 00:11:07.519 { 00:11:07.519 "name": "BaseBdev4", 00:11:07.519 "uuid": 
"e7ca592d-1ec4-44a6-b9e5-27001cecad34", 00:11:07.519 "is_configured": true, 00:11:07.519 "data_offset": 2048, 00:11:07.519 "data_size": 63488 00:11:07.519 } 00:11:07.519 ] 00:11:07.519 }' 00:11:07.519 02:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.519 02:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.780 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.780 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.780 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.780 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.780 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.780 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:07.780 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.780 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.780 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.780 [2024-11-28 02:26:41.439143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.070 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.070 "name": "Existed_Raid", 00:11:08.070 "uuid": "5f4fa353-5242-45c4-84b8-027a97067262", 00:11:08.070 "strip_size_kb": 0, 00:11:08.070 "state": "configuring", 00:11:08.070 "raid_level": "raid1", 00:11:08.070 "superblock": true, 00:11:08.070 "num_base_bdevs": 4, 00:11:08.070 "num_base_bdevs_discovered": 2, 00:11:08.070 "num_base_bdevs_operational": 4, 00:11:08.070 "base_bdevs_list": [ 00:11:08.070 { 00:11:08.070 "name": null, 00:11:08.070 
"uuid": "4ac3d770-1741-450c-909a-ef498491c440", 00:11:08.070 "is_configured": false, 00:11:08.070 "data_offset": 0, 00:11:08.070 "data_size": 63488 00:11:08.070 }, 00:11:08.070 { 00:11:08.070 "name": null, 00:11:08.070 "uuid": "8e37b700-03cc-4a6e-aa67-8ae47a7e6641", 00:11:08.070 "is_configured": false, 00:11:08.070 "data_offset": 0, 00:11:08.070 "data_size": 63488 00:11:08.070 }, 00:11:08.070 { 00:11:08.070 "name": "BaseBdev3", 00:11:08.070 "uuid": "16543f03-593a-4513-ae2a-3bd481ece85c", 00:11:08.070 "is_configured": true, 00:11:08.070 "data_offset": 2048, 00:11:08.070 "data_size": 63488 00:11:08.070 }, 00:11:08.070 { 00:11:08.070 "name": "BaseBdev4", 00:11:08.070 "uuid": "e7ca592d-1ec4-44a6-b9e5-27001cecad34", 00:11:08.070 "is_configured": true, 00:11:08.070 "data_offset": 2048, 00:11:08.070 "data_size": 63488 00:11:08.070 } 00:11:08.070 ] 00:11:08.071 }' 00:11:08.071 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.071 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.330 [2024-11-28 02:26:41.991423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.330 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.331 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.331 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.331 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.331 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.331 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.331 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.331 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.331 02:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.331 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.331 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.331 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.331 02:26:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.590 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.590 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.590 "name": "Existed_Raid", 00:11:08.590 "uuid": "5f4fa353-5242-45c4-84b8-027a97067262", 00:11:08.590 "strip_size_kb": 0, 00:11:08.590 "state": "configuring", 00:11:08.590 "raid_level": "raid1", 00:11:08.590 "superblock": true, 00:11:08.590 "num_base_bdevs": 4, 00:11:08.590 "num_base_bdevs_discovered": 3, 00:11:08.590 "num_base_bdevs_operational": 4, 00:11:08.590 "base_bdevs_list": [ 00:11:08.590 { 00:11:08.590 "name": null, 00:11:08.590 "uuid": "4ac3d770-1741-450c-909a-ef498491c440", 00:11:08.590 "is_configured": false, 00:11:08.590 "data_offset": 0, 00:11:08.590 "data_size": 63488 00:11:08.590 }, 00:11:08.590 { 00:11:08.590 "name": "BaseBdev2", 00:11:08.590 "uuid": "8e37b700-03cc-4a6e-aa67-8ae47a7e6641", 00:11:08.590 "is_configured": true, 00:11:08.590 "data_offset": 2048, 00:11:08.590 "data_size": 63488 00:11:08.590 }, 00:11:08.590 { 00:11:08.590 "name": "BaseBdev3", 00:11:08.590 "uuid": "16543f03-593a-4513-ae2a-3bd481ece85c", 00:11:08.590 "is_configured": true, 00:11:08.590 "data_offset": 2048, 00:11:08.590 "data_size": 63488 00:11:08.590 }, 00:11:08.590 { 00:11:08.590 "name": "BaseBdev4", 00:11:08.590 "uuid": "e7ca592d-1ec4-44a6-b9e5-27001cecad34", 00:11:08.590 "is_configured": true, 00:11:08.590 "data_offset": 2048, 00:11:08.590 "data_size": 63488 00:11:08.590 } 00:11:08.590 ] 00:11:08.590 }' 00:11:08.590 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.590 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.852 02:26:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4ac3d770-1741-450c-909a-ef498491c440 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.852 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 [2024-11-28 02:26:42.567161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:09.113 [2024-11-28 02:26:42.567421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:09.113 [2024-11-28 02:26:42.567440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:09.113 [2024-11-28 02:26:42.567730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:09.113 NewBaseBdev 00:11:09.113 [2024-11-28 02:26:42.567909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:09.113 [2024-11-28 02:26:42.567921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:09.113 [2024-11-28 02:26:42.568105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.113 02:26:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 [ 00:11:09.113 { 00:11:09.113 "name": "NewBaseBdev", 00:11:09.113 "aliases": [ 00:11:09.113 "4ac3d770-1741-450c-909a-ef498491c440" 00:11:09.113 ], 00:11:09.113 "product_name": "Malloc disk", 00:11:09.113 "block_size": 512, 00:11:09.113 "num_blocks": 65536, 00:11:09.113 "uuid": "4ac3d770-1741-450c-909a-ef498491c440", 00:11:09.113 "assigned_rate_limits": { 00:11:09.113 "rw_ios_per_sec": 0, 00:11:09.113 "rw_mbytes_per_sec": 0, 00:11:09.113 "r_mbytes_per_sec": 0, 00:11:09.113 "w_mbytes_per_sec": 0 00:11:09.113 }, 00:11:09.113 "claimed": true, 00:11:09.113 "claim_type": "exclusive_write", 00:11:09.113 "zoned": false, 00:11:09.113 "supported_io_types": { 00:11:09.113 "read": true, 00:11:09.113 "write": true, 00:11:09.113 "unmap": true, 00:11:09.113 "flush": true, 00:11:09.113 "reset": true, 00:11:09.113 "nvme_admin": false, 00:11:09.113 "nvme_io": false, 00:11:09.113 "nvme_io_md": false, 00:11:09.113 "write_zeroes": true, 00:11:09.113 "zcopy": true, 00:11:09.113 "get_zone_info": false, 00:11:09.113 "zone_management": false, 00:11:09.113 "zone_append": false, 00:11:09.113 "compare": false, 00:11:09.113 "compare_and_write": false, 00:11:09.113 "abort": true, 00:11:09.113 "seek_hole": false, 00:11:09.113 "seek_data": false, 00:11:09.113 "copy": true, 00:11:09.113 "nvme_iov_md": false 00:11:09.113 }, 00:11:09.113 "memory_domains": [ 00:11:09.113 { 00:11:09.113 "dma_device_id": "system", 00:11:09.113 "dma_device_type": 1 00:11:09.113 }, 00:11:09.113 { 00:11:09.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.113 "dma_device_type": 2 00:11:09.113 } 00:11:09.113 ], 00:11:09.113 "driver_specific": {} 00:11:09.113 } 00:11:09.113 ] 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.113 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.113 02:26:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.114 "name": "Existed_Raid", 00:11:09.114 "uuid": "5f4fa353-5242-45c4-84b8-027a97067262", 00:11:09.114 "strip_size_kb": 0, 00:11:09.114 
"state": "online", 00:11:09.114 "raid_level": "raid1", 00:11:09.114 "superblock": true, 00:11:09.114 "num_base_bdevs": 4, 00:11:09.114 "num_base_bdevs_discovered": 4, 00:11:09.114 "num_base_bdevs_operational": 4, 00:11:09.114 "base_bdevs_list": [ 00:11:09.114 { 00:11:09.114 "name": "NewBaseBdev", 00:11:09.114 "uuid": "4ac3d770-1741-450c-909a-ef498491c440", 00:11:09.114 "is_configured": true, 00:11:09.114 "data_offset": 2048, 00:11:09.114 "data_size": 63488 00:11:09.114 }, 00:11:09.114 { 00:11:09.114 "name": "BaseBdev2", 00:11:09.114 "uuid": "8e37b700-03cc-4a6e-aa67-8ae47a7e6641", 00:11:09.114 "is_configured": true, 00:11:09.114 "data_offset": 2048, 00:11:09.114 "data_size": 63488 00:11:09.114 }, 00:11:09.114 { 00:11:09.114 "name": "BaseBdev3", 00:11:09.114 "uuid": "16543f03-593a-4513-ae2a-3bd481ece85c", 00:11:09.114 "is_configured": true, 00:11:09.114 "data_offset": 2048, 00:11:09.114 "data_size": 63488 00:11:09.114 }, 00:11:09.114 { 00:11:09.114 "name": "BaseBdev4", 00:11:09.114 "uuid": "e7ca592d-1ec4-44a6-b9e5-27001cecad34", 00:11:09.114 "is_configured": true, 00:11:09.114 "data_offset": 2048, 00:11:09.114 "data_size": 63488 00:11:09.114 } 00:11:09.114 ] 00:11:09.114 }' 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.114 02:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.375 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:09.375 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:09.375 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.375 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.375 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.375 
02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.375 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:09.375 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.375 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.375 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.635 [2024-11-28 02:26:43.054841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.635 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.635 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.635 "name": "Existed_Raid", 00:11:09.635 "aliases": [ 00:11:09.635 "5f4fa353-5242-45c4-84b8-027a97067262" 00:11:09.635 ], 00:11:09.635 "product_name": "Raid Volume", 00:11:09.635 "block_size": 512, 00:11:09.635 "num_blocks": 63488, 00:11:09.635 "uuid": "5f4fa353-5242-45c4-84b8-027a97067262", 00:11:09.635 "assigned_rate_limits": { 00:11:09.635 "rw_ios_per_sec": 0, 00:11:09.635 "rw_mbytes_per_sec": 0, 00:11:09.635 "r_mbytes_per_sec": 0, 00:11:09.635 "w_mbytes_per_sec": 0 00:11:09.635 }, 00:11:09.635 "claimed": false, 00:11:09.635 "zoned": false, 00:11:09.635 "supported_io_types": { 00:11:09.635 "read": true, 00:11:09.635 "write": true, 00:11:09.635 "unmap": false, 00:11:09.635 "flush": false, 00:11:09.635 "reset": true, 00:11:09.635 "nvme_admin": false, 00:11:09.635 "nvme_io": false, 00:11:09.635 "nvme_io_md": false, 00:11:09.635 "write_zeroes": true, 00:11:09.635 "zcopy": false, 00:11:09.635 "get_zone_info": false, 00:11:09.635 "zone_management": false, 00:11:09.635 "zone_append": false, 00:11:09.635 "compare": false, 00:11:09.635 "compare_and_write": false, 00:11:09.635 
"abort": false, 00:11:09.635 "seek_hole": false, 00:11:09.635 "seek_data": false, 00:11:09.635 "copy": false, 00:11:09.635 "nvme_iov_md": false 00:11:09.635 }, 00:11:09.635 "memory_domains": [ 00:11:09.635 { 00:11:09.635 "dma_device_id": "system", 00:11:09.635 "dma_device_type": 1 00:11:09.635 }, 00:11:09.635 { 00:11:09.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.635 "dma_device_type": 2 00:11:09.635 }, 00:11:09.635 { 00:11:09.635 "dma_device_id": "system", 00:11:09.635 "dma_device_type": 1 00:11:09.635 }, 00:11:09.635 { 00:11:09.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.635 "dma_device_type": 2 00:11:09.635 }, 00:11:09.635 { 00:11:09.635 "dma_device_id": "system", 00:11:09.635 "dma_device_type": 1 00:11:09.635 }, 00:11:09.635 { 00:11:09.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.635 "dma_device_type": 2 00:11:09.635 }, 00:11:09.635 { 00:11:09.635 "dma_device_id": "system", 00:11:09.635 "dma_device_type": 1 00:11:09.635 }, 00:11:09.635 { 00:11:09.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.636 "dma_device_type": 2 00:11:09.636 } 00:11:09.636 ], 00:11:09.636 "driver_specific": { 00:11:09.636 "raid": { 00:11:09.636 "uuid": "5f4fa353-5242-45c4-84b8-027a97067262", 00:11:09.636 "strip_size_kb": 0, 00:11:09.636 "state": "online", 00:11:09.636 "raid_level": "raid1", 00:11:09.636 "superblock": true, 00:11:09.636 "num_base_bdevs": 4, 00:11:09.636 "num_base_bdevs_discovered": 4, 00:11:09.636 "num_base_bdevs_operational": 4, 00:11:09.636 "base_bdevs_list": [ 00:11:09.636 { 00:11:09.636 "name": "NewBaseBdev", 00:11:09.636 "uuid": "4ac3d770-1741-450c-909a-ef498491c440", 00:11:09.636 "is_configured": true, 00:11:09.636 "data_offset": 2048, 00:11:09.636 "data_size": 63488 00:11:09.636 }, 00:11:09.636 { 00:11:09.636 "name": "BaseBdev2", 00:11:09.636 "uuid": "8e37b700-03cc-4a6e-aa67-8ae47a7e6641", 00:11:09.636 "is_configured": true, 00:11:09.636 "data_offset": 2048, 00:11:09.636 "data_size": 63488 00:11:09.636 }, 00:11:09.636 { 
00:11:09.636 "name": "BaseBdev3", 00:11:09.636 "uuid": "16543f03-593a-4513-ae2a-3bd481ece85c", 00:11:09.636 "is_configured": true, 00:11:09.636 "data_offset": 2048, 00:11:09.636 "data_size": 63488 00:11:09.636 }, 00:11:09.636 { 00:11:09.636 "name": "BaseBdev4", 00:11:09.636 "uuid": "e7ca592d-1ec4-44a6-b9e5-27001cecad34", 00:11:09.636 "is_configured": true, 00:11:09.636 "data_offset": 2048, 00:11:09.636 "data_size": 63488 00:11:09.636 } 00:11:09.636 ] 00:11:09.636 } 00:11:09.636 } 00:11:09.636 }' 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:09.636 BaseBdev2 00:11:09.636 BaseBdev3 00:11:09.636 BaseBdev4' 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.636 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.896 [2024-11-28 02:26:43.401863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.896 [2024-11-28 02:26:43.401967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.896 [2024-11-28 02:26:43.402083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.896 [2024-11-28 02:26:43.402381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.896 [2024-11-28 02:26:43.402454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73635 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73635 ']' 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73635 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.896 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73635 00:11:09.897 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.897 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.897 killing process with pid 73635 00:11:09.897 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73635' 00:11:09.897 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73635 00:11:09.897 [2024-11-28 02:26:43.449498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.897 02:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73635 00:11:10.156 [2024-11-28 02:26:43.833160] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.537 02:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:11.537 00:11:11.537 real 0m11.538s 00:11:11.537 user 0m18.280s 00:11:11.537 sys 0m2.121s 00:11:11.537 02:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:11.537 ************************************ 00:11:11.537 END TEST raid_state_function_test_sb 00:11:11.537 ************************************ 00:11:11.537 02:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.537 02:26:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:11.537 02:26:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:11.537 02:26:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.537 02:26:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.537 ************************************ 00:11:11.537 START TEST raid_superblock_test 00:11:11.537 ************************************ 00:11:11.537 02:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:11.537 02:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:11.537 02:26:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:11.537 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74300 00:11:11.538 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:11.538 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74300 00:11:11.538 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74300 ']' 00:11:11.538 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.538 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.538 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.538 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.538 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.538 [2024-11-28 02:26:45.091178] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:11.538 [2024-11-28 02:26:45.091372] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74300 ] 00:11:11.797 [2024-11-28 02:26:45.262517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.797 [2024-11-28 02:26:45.375388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.057 [2024-11-28 02:26:45.582157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.057 [2024-11-28 02:26:45.582296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:12.318 
02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.318 malloc1 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.318 [2024-11-28 02:26:45.957875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:12.318 [2024-11-28 02:26:45.957955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.318 [2024-11-28 02:26:45.957981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:12.318 [2024-11-28 02:26:45.957993] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.318 [2024-11-28 02:26:45.960079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.318 [2024-11-28 02:26:45.960126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:12.318 pt1 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.318 02:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.579 malloc2 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.579 [2024-11-28 02:26:46.013568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:12.579 [2024-11-28 02:26:46.013679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.579 [2024-11-28 02:26:46.013728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:12.579 [2024-11-28 02:26:46.013767] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.579 [2024-11-28 02:26:46.015818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.579 [2024-11-28 02:26:46.015904] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:12.579 
pt2 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.579 malloc3 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.579 [2024-11-28 02:26:46.085350] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:12.579 [2024-11-28 02:26:46.085456] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.579 [2024-11-28 02:26:46.085501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:12.579 [2024-11-28 02:26:46.085536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.579 [2024-11-28 02:26:46.087584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.579 [2024-11-28 02:26:46.087681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:12.579 pt3 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.579 malloc4 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.579 [2024-11-28 02:26:46.146913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:12.579 [2024-11-28 02:26:46.146994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.579 [2024-11-28 02:26:46.147019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:12.579 [2024-11-28 02:26:46.147031] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.579 [2024-11-28 02:26:46.149215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.579 [2024-11-28 02:26:46.149258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:12.579 pt4 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.579 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.579 [2024-11-28 02:26:46.158947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:12.579 [2024-11-28 02:26:46.160768] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:12.579 [2024-11-28 02:26:46.160833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:12.580 [2024-11-28 02:26:46.160904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:12.580 [2024-11-28 02:26:46.161212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:12.580 [2024-11-28 02:26:46.161275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.580 [2024-11-28 02:26:46.161579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:12.580 [2024-11-28 02:26:46.161815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:12.580 [2024-11-28 02:26:46.161876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:12.580 [2024-11-28 02:26:46.162114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.580 
02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.580 "name": "raid_bdev1", 00:11:12.580 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:12.580 "strip_size_kb": 0, 00:11:12.580 "state": "online", 00:11:12.580 "raid_level": "raid1", 00:11:12.580 "superblock": true, 00:11:12.580 "num_base_bdevs": 4, 00:11:12.580 "num_base_bdevs_discovered": 4, 00:11:12.580 "num_base_bdevs_operational": 4, 00:11:12.580 "base_bdevs_list": [ 00:11:12.580 { 00:11:12.580 "name": "pt1", 00:11:12.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:12.580 "is_configured": true, 00:11:12.580 "data_offset": 2048, 00:11:12.580 "data_size": 63488 00:11:12.580 }, 00:11:12.580 { 00:11:12.580 "name": "pt2", 00:11:12.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.580 "is_configured": true, 00:11:12.580 "data_offset": 2048, 00:11:12.580 "data_size": 63488 00:11:12.580 }, 00:11:12.580 { 00:11:12.580 "name": "pt3", 00:11:12.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.580 "is_configured": true, 00:11:12.580 "data_offset": 2048, 00:11:12.580 "data_size": 63488 
00:11:12.580 }, 00:11:12.580 { 00:11:12.580 "name": "pt4", 00:11:12.580 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:12.580 "is_configured": true, 00:11:12.580 "data_offset": 2048, 00:11:12.580 "data_size": 63488 00:11:12.580 } 00:11:12.580 ] 00:11:12.580 }' 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.580 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.148 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:13.148 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.149 [2024-11-28 02:26:46.602457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.149 "name": "raid_bdev1", 00:11:13.149 "aliases": [ 00:11:13.149 "d86a1bb2-9451-4ddc-911e-4665677bd2e1" 00:11:13.149 ], 
00:11:13.149 "product_name": "Raid Volume", 00:11:13.149 "block_size": 512, 00:11:13.149 "num_blocks": 63488, 00:11:13.149 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:13.149 "assigned_rate_limits": { 00:11:13.149 "rw_ios_per_sec": 0, 00:11:13.149 "rw_mbytes_per_sec": 0, 00:11:13.149 "r_mbytes_per_sec": 0, 00:11:13.149 "w_mbytes_per_sec": 0 00:11:13.149 }, 00:11:13.149 "claimed": false, 00:11:13.149 "zoned": false, 00:11:13.149 "supported_io_types": { 00:11:13.149 "read": true, 00:11:13.149 "write": true, 00:11:13.149 "unmap": false, 00:11:13.149 "flush": false, 00:11:13.149 "reset": true, 00:11:13.149 "nvme_admin": false, 00:11:13.149 "nvme_io": false, 00:11:13.149 "nvme_io_md": false, 00:11:13.149 "write_zeroes": true, 00:11:13.149 "zcopy": false, 00:11:13.149 "get_zone_info": false, 00:11:13.149 "zone_management": false, 00:11:13.149 "zone_append": false, 00:11:13.149 "compare": false, 00:11:13.149 "compare_and_write": false, 00:11:13.149 "abort": false, 00:11:13.149 "seek_hole": false, 00:11:13.149 "seek_data": false, 00:11:13.149 "copy": false, 00:11:13.149 "nvme_iov_md": false 00:11:13.149 }, 00:11:13.149 "memory_domains": [ 00:11:13.149 { 00:11:13.149 "dma_device_id": "system", 00:11:13.149 "dma_device_type": 1 00:11:13.149 }, 00:11:13.149 { 00:11:13.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.149 "dma_device_type": 2 00:11:13.149 }, 00:11:13.149 { 00:11:13.149 "dma_device_id": "system", 00:11:13.149 "dma_device_type": 1 00:11:13.149 }, 00:11:13.149 { 00:11:13.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.149 "dma_device_type": 2 00:11:13.149 }, 00:11:13.149 { 00:11:13.149 "dma_device_id": "system", 00:11:13.149 "dma_device_type": 1 00:11:13.149 }, 00:11:13.149 { 00:11:13.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.149 "dma_device_type": 2 00:11:13.149 }, 00:11:13.149 { 00:11:13.149 "dma_device_id": "system", 00:11:13.149 "dma_device_type": 1 00:11:13.149 }, 00:11:13.149 { 00:11:13.149 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:13.149 "dma_device_type": 2 00:11:13.149 } 00:11:13.149 ], 00:11:13.149 "driver_specific": { 00:11:13.149 "raid": { 00:11:13.149 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:13.149 "strip_size_kb": 0, 00:11:13.149 "state": "online", 00:11:13.149 "raid_level": "raid1", 00:11:13.149 "superblock": true, 00:11:13.149 "num_base_bdevs": 4, 00:11:13.149 "num_base_bdevs_discovered": 4, 00:11:13.149 "num_base_bdevs_operational": 4, 00:11:13.149 "base_bdevs_list": [ 00:11:13.149 { 00:11:13.149 "name": "pt1", 00:11:13.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.149 "is_configured": true, 00:11:13.149 "data_offset": 2048, 00:11:13.149 "data_size": 63488 00:11:13.149 }, 00:11:13.149 { 00:11:13.149 "name": "pt2", 00:11:13.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.149 "is_configured": true, 00:11:13.149 "data_offset": 2048, 00:11:13.149 "data_size": 63488 00:11:13.149 }, 00:11:13.149 { 00:11:13.149 "name": "pt3", 00:11:13.149 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:13.149 "is_configured": true, 00:11:13.149 "data_offset": 2048, 00:11:13.149 "data_size": 63488 00:11:13.149 }, 00:11:13.149 { 00:11:13.149 "name": "pt4", 00:11:13.149 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:13.149 "is_configured": true, 00:11:13.149 "data_offset": 2048, 00:11:13.149 "data_size": 63488 00:11:13.149 } 00:11:13.149 ] 00:11:13.149 } 00:11:13.149 } 00:11:13.149 }' 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:13.149 pt2 00:11:13.149 pt3 00:11:13.149 pt4' 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.149 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.150 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.150 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.150 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.150 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:13.150 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.150 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.150 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.409 02:26:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.409 [2024-11-28 02:26:46.925856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d86a1bb2-9451-4ddc-911e-4665677bd2e1 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d86a1bb2-9451-4ddc-911e-4665677bd2e1 ']' 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.409 [2024-11-28 02:26:46.961498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.409 [2024-11-28 02:26:46.961575] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.409 [2024-11-28 02:26:46.961685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.409 [2024-11-28 02:26:46.961806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.409 [2024-11-28 02:26:46.961877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.409 02:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.409 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.410 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:13.410 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:13.410 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.410 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:13.669 02:26:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.669 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.669 [2024-11-28 02:26:47.125237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:13.669 [2024-11-28 02:26:47.127084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:13.669 [2024-11-28 02:26:47.127138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:13.669 [2024-11-28 02:26:47.127177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:13.669 [2024-11-28 02:26:47.127231] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:13.669 [2024-11-28 02:26:47.127287] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:13.669 [2024-11-28 02:26:47.127309] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:13.669 [2024-11-28 02:26:47.127331] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:13.669 [2024-11-28 02:26:47.127346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.669 [2024-11-28 02:26:47.127359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:13.670 request: 00:11:13.670 { 00:11:13.670 "name": "raid_bdev1", 00:11:13.670 "raid_level": "raid1", 00:11:13.670 "base_bdevs": [ 00:11:13.670 "malloc1", 00:11:13.670 "malloc2", 00:11:13.670 "malloc3", 00:11:13.670 "malloc4" 00:11:13.670 ], 00:11:13.670 "superblock": false, 00:11:13.670 "method": "bdev_raid_create", 00:11:13.670 "req_id": 1 00:11:13.670 } 00:11:13.670 Got JSON-RPC error response 00:11:13.670 response: 00:11:13.670 { 00:11:13.670 "code": -17, 00:11:13.670 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:13.670 } 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:13.670 
02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.670 [2024-11-28 02:26:47.193110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:13.670 [2024-11-28 02:26:47.193234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.670 [2024-11-28 02:26:47.193281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:13.670 [2024-11-28 02:26:47.193329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.670 [2024-11-28 02:26:47.195800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.670 [2024-11-28 02:26:47.195903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:13.670 [2024-11-28 02:26:47.196052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:13.670 [2024-11-28 02:26:47.196172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:13.670 pt1 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.670 02:26:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.670 "name": "raid_bdev1", 00:11:13.670 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:13.670 "strip_size_kb": 0, 00:11:13.670 "state": "configuring", 00:11:13.670 "raid_level": "raid1", 00:11:13.670 "superblock": true, 00:11:13.670 "num_base_bdevs": 4, 00:11:13.670 "num_base_bdevs_discovered": 1, 00:11:13.670 "num_base_bdevs_operational": 4, 00:11:13.670 "base_bdevs_list": [ 00:11:13.670 { 00:11:13.670 "name": "pt1", 00:11:13.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.670 "is_configured": true, 00:11:13.670 "data_offset": 2048, 00:11:13.670 "data_size": 63488 00:11:13.670 }, 00:11:13.670 { 00:11:13.670 "name": null, 00:11:13.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.670 "is_configured": false, 00:11:13.670 "data_offset": 2048, 00:11:13.670 "data_size": 63488 00:11:13.670 }, 00:11:13.670 { 00:11:13.670 "name": null, 00:11:13.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:13.670 
"is_configured": false, 00:11:13.670 "data_offset": 2048, 00:11:13.670 "data_size": 63488 00:11:13.670 }, 00:11:13.670 { 00:11:13.670 "name": null, 00:11:13.670 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:13.670 "is_configured": false, 00:11:13.670 "data_offset": 2048, 00:11:13.670 "data_size": 63488 00:11:13.670 } 00:11:13.670 ] 00:11:13.670 }' 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.670 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.259 [2024-11-28 02:26:47.648431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:14.259 [2024-11-28 02:26:47.648584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.259 [2024-11-28 02:26:47.648614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:14.259 [2024-11-28 02:26:47.648627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.259 [2024-11-28 02:26:47.649106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.259 [2024-11-28 02:26:47.649131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:14.259 [2024-11-28 02:26:47.649226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:14.259 [2024-11-28 02:26:47.649257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:14.259 pt2 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.259 [2024-11-28 02:26:47.660399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.259 02:26:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.259 "name": "raid_bdev1", 00:11:14.259 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:14.259 "strip_size_kb": 0, 00:11:14.259 "state": "configuring", 00:11:14.259 "raid_level": "raid1", 00:11:14.259 "superblock": true, 00:11:14.259 "num_base_bdevs": 4, 00:11:14.259 "num_base_bdevs_discovered": 1, 00:11:14.259 "num_base_bdevs_operational": 4, 00:11:14.259 "base_bdevs_list": [ 00:11:14.259 { 00:11:14.259 "name": "pt1", 00:11:14.259 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.259 "is_configured": true, 00:11:14.259 "data_offset": 2048, 00:11:14.259 "data_size": 63488 00:11:14.259 }, 00:11:14.259 { 00:11:14.259 "name": null, 00:11:14.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.259 "is_configured": false, 00:11:14.259 "data_offset": 0, 00:11:14.259 "data_size": 63488 00:11:14.259 }, 00:11:14.259 { 00:11:14.259 "name": null, 00:11:14.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.259 "is_configured": false, 00:11:14.259 "data_offset": 2048, 00:11:14.259 "data_size": 63488 00:11:14.259 }, 00:11:14.259 { 00:11:14.259 "name": null, 00:11:14.259 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.259 "is_configured": false, 00:11:14.259 "data_offset": 2048, 00:11:14.259 "data_size": 63488 00:11:14.259 } 00:11:14.259 ] 00:11:14.259 }' 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.259 02:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.518 [2024-11-28 02:26:48.155586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:14.518 [2024-11-28 02:26:48.155667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.518 [2024-11-28 02:26:48.155693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:14.518 [2024-11-28 02:26:48.155705] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.518 [2024-11-28 02:26:48.156208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.518 [2024-11-28 02:26:48.156284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:14.518 [2024-11-28 02:26:48.156390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:14.518 [2024-11-28 02:26:48.156418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:14.518 pt2 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:14.518 02:26:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.518 [2024-11-28 02:26:48.167536] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:14.518 [2024-11-28 02:26:48.167645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.518 [2024-11-28 02:26:48.167672] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:14.518 [2024-11-28 02:26:48.167683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.518 [2024-11-28 02:26:48.168110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.518 [2024-11-28 02:26:48.168129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:14.518 [2024-11-28 02:26:48.168204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:14.518 [2024-11-28 02:26:48.168226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:14.518 pt3 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.518 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.518 [2024-11-28 02:26:48.179472] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:14.518 [2024-11-28 
02:26:48.179524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.518 [2024-11-28 02:26:48.179544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:14.519 [2024-11-28 02:26:48.179555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.519 [2024-11-28 02:26:48.179964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.519 [2024-11-28 02:26:48.179988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:14.519 [2024-11-28 02:26:48.180059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:14.519 [2024-11-28 02:26:48.180096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:14.519 [2024-11-28 02:26:48.180264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:14.519 [2024-11-28 02:26:48.180274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:14.519 [2024-11-28 02:26:48.180517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:14.519 [2024-11-28 02:26:48.180689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:14.519 [2024-11-28 02:26:48.180703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:14.519 [2024-11-28 02:26:48.180852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.519 pt4 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.519 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.778 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.778 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.778 "name": "raid_bdev1", 00:11:14.778 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:14.778 "strip_size_kb": 0, 00:11:14.778 "state": "online", 00:11:14.778 "raid_level": "raid1", 00:11:14.778 "superblock": true, 00:11:14.778 "num_base_bdevs": 4, 00:11:14.778 
"num_base_bdevs_discovered": 4, 00:11:14.778 "num_base_bdevs_operational": 4, 00:11:14.778 "base_bdevs_list": [ 00:11:14.778 { 00:11:14.778 "name": "pt1", 00:11:14.778 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.778 "is_configured": true, 00:11:14.778 "data_offset": 2048, 00:11:14.778 "data_size": 63488 00:11:14.778 }, 00:11:14.778 { 00:11:14.778 "name": "pt2", 00:11:14.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.778 "is_configured": true, 00:11:14.778 "data_offset": 2048, 00:11:14.778 "data_size": 63488 00:11:14.778 }, 00:11:14.778 { 00:11:14.778 "name": "pt3", 00:11:14.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.778 "is_configured": true, 00:11:14.778 "data_offset": 2048, 00:11:14.778 "data_size": 63488 00:11:14.778 }, 00:11:14.778 { 00:11:14.778 "name": "pt4", 00:11:14.778 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.778 "is_configured": true, 00:11:14.778 "data_offset": 2048, 00:11:14.778 "data_size": 63488 00:11:14.778 } 00:11:14.778 ] 00:11:14.778 }' 00:11:14.778 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.778 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.038 [2024-11-28 02:26:48.675092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.038 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.038 "name": "raid_bdev1", 00:11:15.038 "aliases": [ 00:11:15.038 "d86a1bb2-9451-4ddc-911e-4665677bd2e1" 00:11:15.038 ], 00:11:15.038 "product_name": "Raid Volume", 00:11:15.038 "block_size": 512, 00:11:15.038 "num_blocks": 63488, 00:11:15.038 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:15.038 "assigned_rate_limits": { 00:11:15.038 "rw_ios_per_sec": 0, 00:11:15.038 "rw_mbytes_per_sec": 0, 00:11:15.038 "r_mbytes_per_sec": 0, 00:11:15.038 "w_mbytes_per_sec": 0 00:11:15.038 }, 00:11:15.038 "claimed": false, 00:11:15.038 "zoned": false, 00:11:15.038 "supported_io_types": { 00:11:15.038 "read": true, 00:11:15.038 "write": true, 00:11:15.038 "unmap": false, 00:11:15.038 "flush": false, 00:11:15.038 "reset": true, 00:11:15.038 "nvme_admin": false, 00:11:15.038 "nvme_io": false, 00:11:15.038 "nvme_io_md": false, 00:11:15.038 "write_zeroes": true, 00:11:15.038 "zcopy": false, 00:11:15.038 "get_zone_info": false, 00:11:15.038 "zone_management": false, 00:11:15.038 "zone_append": false, 00:11:15.038 "compare": false, 00:11:15.038 "compare_and_write": false, 00:11:15.038 "abort": false, 00:11:15.038 "seek_hole": false, 00:11:15.038 "seek_data": false, 00:11:15.038 "copy": false, 00:11:15.038 "nvme_iov_md": false 00:11:15.038 }, 00:11:15.038 "memory_domains": [ 00:11:15.038 { 00:11:15.038 "dma_device_id": "system", 00:11:15.038 
"dma_device_type": 1 00:11:15.038 }, 00:11:15.038 { 00:11:15.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.038 "dma_device_type": 2 00:11:15.038 }, 00:11:15.038 { 00:11:15.038 "dma_device_id": "system", 00:11:15.038 "dma_device_type": 1 00:11:15.038 }, 00:11:15.038 { 00:11:15.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.038 "dma_device_type": 2 00:11:15.038 }, 00:11:15.038 { 00:11:15.038 "dma_device_id": "system", 00:11:15.038 "dma_device_type": 1 00:11:15.038 }, 00:11:15.038 { 00:11:15.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.038 "dma_device_type": 2 00:11:15.038 }, 00:11:15.038 { 00:11:15.038 "dma_device_id": "system", 00:11:15.038 "dma_device_type": 1 00:11:15.038 }, 00:11:15.038 { 00:11:15.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.038 "dma_device_type": 2 00:11:15.038 } 00:11:15.038 ], 00:11:15.038 "driver_specific": { 00:11:15.038 "raid": { 00:11:15.038 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:15.038 "strip_size_kb": 0, 00:11:15.038 "state": "online", 00:11:15.038 "raid_level": "raid1", 00:11:15.038 "superblock": true, 00:11:15.038 "num_base_bdevs": 4, 00:11:15.038 "num_base_bdevs_discovered": 4, 00:11:15.038 "num_base_bdevs_operational": 4, 00:11:15.038 "base_bdevs_list": [ 00:11:15.038 { 00:11:15.038 "name": "pt1", 00:11:15.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.038 "is_configured": true, 00:11:15.038 "data_offset": 2048, 00:11:15.038 "data_size": 63488 00:11:15.038 }, 00:11:15.038 { 00:11:15.038 "name": "pt2", 00:11:15.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.038 "is_configured": true, 00:11:15.038 "data_offset": 2048, 00:11:15.038 "data_size": 63488 00:11:15.038 }, 00:11:15.038 { 00:11:15.038 "name": "pt3", 00:11:15.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.038 "is_configured": true, 00:11:15.038 "data_offset": 2048, 00:11:15.038 "data_size": 63488 00:11:15.038 }, 00:11:15.038 { 00:11:15.038 "name": "pt4", 00:11:15.038 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:15.038 "is_configured": true, 00:11:15.038 "data_offset": 2048, 00:11:15.038 "data_size": 63488 00:11:15.038 } 00:11:15.038 ] 00:11:15.038 } 00:11:15.038 } 00:11:15.038 }' 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:15.297 pt2 00:11:15.297 pt3 00:11:15.297 pt4' 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.297 02:26:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.297 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.298 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.298 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.298 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.298 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:15.298 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.298 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:15.298 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.298 02:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.557 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.557 02:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.557 [2024-11-28 02:26:49.010402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d86a1bb2-9451-4ddc-911e-4665677bd2e1 '!=' d86a1bb2-9451-4ddc-911e-4665677bd2e1 ']' 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.557 [2024-11-28 02:26:49.050109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:15.557 02:26:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.557 "name": "raid_bdev1", 00:11:15.557 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:15.557 "strip_size_kb": 0, 00:11:15.557 "state": "online", 
00:11:15.557 "raid_level": "raid1", 00:11:15.557 "superblock": true, 00:11:15.557 "num_base_bdevs": 4, 00:11:15.557 "num_base_bdevs_discovered": 3, 00:11:15.557 "num_base_bdevs_operational": 3, 00:11:15.557 "base_bdevs_list": [ 00:11:15.557 { 00:11:15.557 "name": null, 00:11:15.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.557 "is_configured": false, 00:11:15.557 "data_offset": 0, 00:11:15.557 "data_size": 63488 00:11:15.557 }, 00:11:15.557 { 00:11:15.557 "name": "pt2", 00:11:15.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.557 "is_configured": true, 00:11:15.557 "data_offset": 2048, 00:11:15.557 "data_size": 63488 00:11:15.557 }, 00:11:15.557 { 00:11:15.557 "name": "pt3", 00:11:15.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.557 "is_configured": true, 00:11:15.557 "data_offset": 2048, 00:11:15.557 "data_size": 63488 00:11:15.557 }, 00:11:15.557 { 00:11:15.557 "name": "pt4", 00:11:15.557 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.557 "is_configured": true, 00:11:15.557 "data_offset": 2048, 00:11:15.557 "data_size": 63488 00:11:15.557 } 00:11:15.557 ] 00:11:15.557 }' 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.557 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.816 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:15.816 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.816 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.816 [2024-11-28 02:26:49.469362] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.816 [2024-11-28 02:26:49.469452] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.816 [2024-11-28 02:26:49.469562] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:15.816 [2024-11-28 02:26:49.469683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.816 [2024-11-28 02:26:49.469742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:15.816 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.816 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.816 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.816 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:15.816 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.816 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:16.075 
02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:16.075 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.076 [2024-11-28 02:26:49.557186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:16.076 [2024-11-28 02:26:49.557246] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.076 [2024-11-28 02:26:49.557267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:16.076 [2024-11-28 02:26:49.557279] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.076 [2024-11-28 02:26:49.559426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.076 [2024-11-28 02:26:49.559470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:16.076 [2024-11-28 02:26:49.559557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:16.076 [2024-11-28 02:26:49.559604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.076 pt2 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.076 "name": "raid_bdev1", 00:11:16.076 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:16.076 "strip_size_kb": 0, 00:11:16.076 "state": "configuring", 00:11:16.076 "raid_level": "raid1", 00:11:16.076 "superblock": true, 00:11:16.076 "num_base_bdevs": 4, 00:11:16.076 "num_base_bdevs_discovered": 1, 00:11:16.076 "num_base_bdevs_operational": 3, 00:11:16.076 "base_bdevs_list": [ 00:11:16.076 { 00:11:16.076 "name": null, 00:11:16.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.076 "is_configured": false, 00:11:16.076 "data_offset": 2048, 00:11:16.076 "data_size": 63488 00:11:16.076 }, 00:11:16.076 { 00:11:16.076 "name": "pt2", 00:11:16.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.076 "is_configured": true, 00:11:16.076 "data_offset": 2048, 00:11:16.076 "data_size": 63488 00:11:16.076 }, 00:11:16.076 { 00:11:16.076 "name": null, 00:11:16.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.076 "is_configured": false, 00:11:16.076 "data_offset": 2048, 00:11:16.076 "data_size": 63488 00:11:16.076 }, 00:11:16.076 { 00:11:16.076 "name": null, 00:11:16.076 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.076 "is_configured": false, 00:11:16.076 "data_offset": 2048, 00:11:16.076 "data_size": 63488 00:11:16.076 } 00:11:16.076 ] 00:11:16.076 }' 
00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.076 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.335 [2024-11-28 02:26:49.968541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:16.335 [2024-11-28 02:26:49.968677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.335 [2024-11-28 02:26:49.968724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:16.335 [2024-11-28 02:26:49.968759] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.335 [2024-11-28 02:26:49.969296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.335 [2024-11-28 02:26:49.969373] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:16.335 [2024-11-28 02:26:49.969509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:16.335 [2024-11-28 02:26:49.969567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:16.335 pt3 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.335 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.336 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.336 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.336 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.336 02:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.336 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.336 02:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.336 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.594 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.594 "name": "raid_bdev1", 00:11:16.594 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:16.594 "strip_size_kb": 0, 00:11:16.594 "state": "configuring", 00:11:16.594 "raid_level": "raid1", 00:11:16.594 "superblock": true, 00:11:16.594 "num_base_bdevs": 4, 00:11:16.594 "num_base_bdevs_discovered": 2, 00:11:16.594 "num_base_bdevs_operational": 3, 00:11:16.594 
"base_bdevs_list": [ 00:11:16.594 { 00:11:16.594 "name": null, 00:11:16.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.594 "is_configured": false, 00:11:16.594 "data_offset": 2048, 00:11:16.594 "data_size": 63488 00:11:16.594 }, 00:11:16.594 { 00:11:16.594 "name": "pt2", 00:11:16.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.594 "is_configured": true, 00:11:16.594 "data_offset": 2048, 00:11:16.594 "data_size": 63488 00:11:16.594 }, 00:11:16.594 { 00:11:16.594 "name": "pt3", 00:11:16.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.594 "is_configured": true, 00:11:16.594 "data_offset": 2048, 00:11:16.594 "data_size": 63488 00:11:16.594 }, 00:11:16.594 { 00:11:16.594 "name": null, 00:11:16.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.594 "is_configured": false, 00:11:16.594 "data_offset": 2048, 00:11:16.594 "data_size": 63488 00:11:16.594 } 00:11:16.594 ] 00:11:16.594 }' 00:11:16.594 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.594 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.854 [2024-11-28 02:26:50.407813] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:16.854 [2024-11-28 02:26:50.407897] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.854 [2024-11-28 02:26:50.407941] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:16.854 [2024-11-28 02:26:50.407954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.854 [2024-11-28 02:26:50.408421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.854 [2024-11-28 02:26:50.408457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:16.854 [2024-11-28 02:26:50.408557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:16.854 [2024-11-28 02:26:50.408584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:16.854 [2024-11-28 02:26:50.408732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:16.854 [2024-11-28 02:26:50.408748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:16.854 [2024-11-28 02:26:50.409019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:16.854 [2024-11-28 02:26:50.409193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:16.854 [2024-11-28 02:26:50.409207] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:16.854 [2024-11-28 02:26:50.409363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.854 pt4 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.854 "name": "raid_bdev1", 00:11:16.854 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:16.854 "strip_size_kb": 0, 00:11:16.854 "state": "online", 00:11:16.854 "raid_level": "raid1", 00:11:16.854 "superblock": true, 00:11:16.854 "num_base_bdevs": 4, 00:11:16.854 "num_base_bdevs_discovered": 3, 00:11:16.854 "num_base_bdevs_operational": 3, 00:11:16.854 "base_bdevs_list": [ 00:11:16.854 { 00:11:16.854 "name": null, 00:11:16.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.854 "is_configured": false, 00:11:16.854 
"data_offset": 2048, 00:11:16.854 "data_size": 63488 00:11:16.854 }, 00:11:16.854 { 00:11:16.854 "name": "pt2", 00:11:16.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.854 "is_configured": true, 00:11:16.854 "data_offset": 2048, 00:11:16.854 "data_size": 63488 00:11:16.854 }, 00:11:16.854 { 00:11:16.854 "name": "pt3", 00:11:16.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.854 "is_configured": true, 00:11:16.854 "data_offset": 2048, 00:11:16.854 "data_size": 63488 00:11:16.854 }, 00:11:16.854 { 00:11:16.854 "name": "pt4", 00:11:16.854 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.854 "is_configured": true, 00:11:16.854 "data_offset": 2048, 00:11:16.854 "data_size": 63488 00:11:16.854 } 00:11:16.854 ] 00:11:16.854 }' 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.854 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.422 [2024-11-28 02:26:50.823101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.422 [2024-11-28 02:26:50.823194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.422 [2024-11-28 02:26:50.823307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.422 [2024-11-28 02:26:50.823404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.422 [2024-11-28 02:26:50.823466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:17.422 02:26:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.422 [2024-11-28 02:26:50.898992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:17.422 [2024-11-28 02:26:50.899106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:17.422 [2024-11-28 02:26:50.899148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:17.422 [2024-11-28 02:26:50.899198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.422 [2024-11-28 02:26:50.901469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.422 [2024-11-28 02:26:50.901560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:17.422 [2024-11-28 02:26:50.901663] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:17.422 [2024-11-28 02:26:50.901721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:17.422 [2024-11-28 02:26:50.901897] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:17.422 [2024-11-28 02:26:50.901914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.422 [2024-11-28 02:26:50.901931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:17.422 [2024-11-28 02:26:50.902023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.422 [2024-11-28 02:26:50.902143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:17.422 pt1 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.422 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.422 "name": "raid_bdev1", 00:11:17.422 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:17.422 "strip_size_kb": 0, 00:11:17.422 "state": "configuring", 00:11:17.422 "raid_level": "raid1", 00:11:17.422 "superblock": true, 00:11:17.422 "num_base_bdevs": 4, 00:11:17.423 "num_base_bdevs_discovered": 2, 00:11:17.423 "num_base_bdevs_operational": 3, 00:11:17.423 "base_bdevs_list": [ 00:11:17.423 { 00:11:17.423 "name": null, 00:11:17.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.423 "is_configured": false, 00:11:17.423 "data_offset": 2048, 00:11:17.423 
"data_size": 63488 00:11:17.423 }, 00:11:17.423 { 00:11:17.423 "name": "pt2", 00:11:17.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.423 "is_configured": true, 00:11:17.423 "data_offset": 2048, 00:11:17.423 "data_size": 63488 00:11:17.423 }, 00:11:17.423 { 00:11:17.423 "name": "pt3", 00:11:17.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.423 "is_configured": true, 00:11:17.423 "data_offset": 2048, 00:11:17.423 "data_size": 63488 00:11:17.423 }, 00:11:17.423 { 00:11:17.423 "name": null, 00:11:17.423 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.423 "is_configured": false, 00:11:17.423 "data_offset": 2048, 00:11:17.423 "data_size": 63488 00:11:17.423 } 00:11:17.423 ] 00:11:17.423 }' 00:11:17.423 02:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.423 02:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.682 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:17.682 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.682 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.682 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:17.682 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.682 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:17.682 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:17.682 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.682 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.941 [2024-11-28 
02:26:51.366191] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:17.941 [2024-11-28 02:26:51.366327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.941 [2024-11-28 02:26:51.366360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:17.941 [2024-11-28 02:26:51.366372] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.941 [2024-11-28 02:26:51.366845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.941 [2024-11-28 02:26:51.366864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:17.941 [2024-11-28 02:26:51.366981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:17.941 [2024-11-28 02:26:51.367008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:17.941 [2024-11-28 02:26:51.367152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:17.941 [2024-11-28 02:26:51.367169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:17.941 [2024-11-28 02:26:51.367435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:17.941 [2024-11-28 02:26:51.367604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:17.941 [2024-11-28 02:26:51.367636] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:17.941 [2024-11-28 02:26:51.367798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.941 pt4 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:17.941 02:26:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.941 "name": "raid_bdev1", 00:11:17.941 "uuid": "d86a1bb2-9451-4ddc-911e-4665677bd2e1", 00:11:17.941 "strip_size_kb": 0, 00:11:17.941 "state": "online", 00:11:17.941 "raid_level": "raid1", 00:11:17.941 "superblock": true, 00:11:17.941 "num_base_bdevs": 4, 00:11:17.941 "num_base_bdevs_discovered": 3, 00:11:17.941 "num_base_bdevs_operational": 3, 00:11:17.941 "base_bdevs_list": [ 00:11:17.941 { 
00:11:17.941 "name": null, 00:11:17.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.941 "is_configured": false, 00:11:17.941 "data_offset": 2048, 00:11:17.941 "data_size": 63488 00:11:17.941 }, 00:11:17.941 { 00:11:17.941 "name": "pt2", 00:11:17.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.941 "is_configured": true, 00:11:17.941 "data_offset": 2048, 00:11:17.941 "data_size": 63488 00:11:17.941 }, 00:11:17.941 { 00:11:17.941 "name": "pt3", 00:11:17.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.941 "is_configured": true, 00:11:17.941 "data_offset": 2048, 00:11:17.941 "data_size": 63488 00:11:17.941 }, 00:11:17.941 { 00:11:17.941 "name": "pt4", 00:11:17.941 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.941 "is_configured": true, 00:11:17.941 "data_offset": 2048, 00:11:17.941 "data_size": 63488 00:11:17.941 } 00:11:17.941 ] 00:11:17.941 }' 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.941 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.200 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:18.200 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:18.200 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.200 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.200 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.200 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:18.200 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:18.200 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:18.200 
02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.200 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.200 [2024-11-28 02:26:51.877697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d86a1bb2-9451-4ddc-911e-4665677bd2e1 '!=' d86a1bb2-9451-4ddc-911e-4665677bd2e1 ']' 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74300 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74300 ']' 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74300 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74300 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74300' 00:11:18.459 killing process with pid 74300 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74300 00:11:18.459 [2024-11-28 02:26:51.958771] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.459 [2024-11-28 02:26:51.958878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.459 [2024-11-28 02:26:51.958975] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.459 [2024-11-28 02:26:51.958991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:18.459 02:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74300 00:11:18.718 [2024-11-28 02:26:52.355518] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.096 02:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:20.096 00:11:20.096 real 0m8.479s 00:11:20.096 user 0m13.265s 00:11:20.096 sys 0m1.615s 00:11:20.096 02:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.096 02:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.096 ************************************ 00:11:20.096 END TEST raid_superblock_test 00:11:20.096 ************************************ 00:11:20.096 02:26:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:20.096 02:26:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:20.096 02:26:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.096 02:26:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.096 ************************************ 00:11:20.096 START TEST raid_read_error_test 00:11:20.096 ************************************ 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:20.096 02:26:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Uw62bGvIdS 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74793 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74793 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74793 ']' 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.096 02:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.096 [2024-11-28 02:26:53.682215] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:20.096 [2024-11-28 02:26:53.682366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74793 ] 00:11:20.355 [2024-11-28 02:26:53.849574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.355 [2024-11-28 02:26:53.964335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.614 [2024-11-28 02:26:54.173352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.614 [2024-11-28 02:26:54.173397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.872 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.872 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:20.872 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.873 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:20.873 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.873 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.873 BaseBdev1_malloc 00:11:20.873 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.873 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:20.873 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.873 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.132 true 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.132 [2024-11-28 02:26:54.559563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:21.132 [2024-11-28 02:26:54.559634] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.132 [2024-11-28 02:26:54.559656] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:21.132 [2024-11-28 02:26:54.559670] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.132 [2024-11-28 02:26:54.561856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.132 [2024-11-28 02:26:54.561905] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:21.132 BaseBdev1 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.132 BaseBdev2_malloc 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.132 true 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.132 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.132 [2024-11-28 02:26:54.627575] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:21.132 [2024-11-28 02:26:54.627645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.132 [2024-11-28 02:26:54.627664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:21.132 [2024-11-28 02:26:54.627678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.133 [2024-11-28 02:26:54.629799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.133 [2024-11-28 02:26:54.629849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:21.133 BaseBdev2 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.133 BaseBdev3_malloc 00:11:21.133 02:26:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.133 true 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.133 [2024-11-28 02:26:54.715663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:21.133 [2024-11-28 02:26:54.715722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.133 [2024-11-28 02:26:54.715741] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:21.133 [2024-11-28 02:26:54.715754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.133 [2024-11-28 02:26:54.717899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.133 [2024-11-28 02:26:54.717960] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:21.133 BaseBdev3 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.133 BaseBdev4_malloc 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.133 true 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.133 [2024-11-28 02:26:54.771633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:21.133 [2024-11-28 02:26:54.771692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.133 [2024-11-28 02:26:54.771711] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:21.133 [2024-11-28 02:26:54.771724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.133 [2024-11-28 02:26:54.773813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.133 [2024-11-28 02:26:54.773905] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:21.133 BaseBdev4 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.133 [2024-11-28 02:26:54.779699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.133 [2024-11-28 02:26:54.781570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.133 [2024-11-28 02:26:54.781650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.133 [2024-11-28 02:26:54.781716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:21.133 [2024-11-28 02:26:54.781970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:21.133 [2024-11-28 02:26:54.781987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.133 [2024-11-28 02:26:54.782245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:21.133 [2024-11-28 02:26:54.782442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:21.133 [2024-11-28 02:26:54.782454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:21.133 [2024-11-28 02:26:54.782624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:21.133 02:26:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.133 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.393 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.393 "name": "raid_bdev1", 00:11:21.393 "uuid": "a6bb4841-4be0-4653-9b52-0c25c37357be", 00:11:21.393 "strip_size_kb": 0, 00:11:21.393 "state": "online", 00:11:21.393 "raid_level": "raid1", 00:11:21.393 "superblock": true, 00:11:21.393 "num_base_bdevs": 4, 00:11:21.393 "num_base_bdevs_discovered": 4, 00:11:21.393 "num_base_bdevs_operational": 4, 00:11:21.393 "base_bdevs_list": [ 00:11:21.393 { 
00:11:21.393 "name": "BaseBdev1", 00:11:21.393 "uuid": "8c3db49e-968c-532c-b707-110f762b5356", 00:11:21.393 "is_configured": true, 00:11:21.393 "data_offset": 2048, 00:11:21.393 "data_size": 63488 00:11:21.393 }, 00:11:21.393 { 00:11:21.393 "name": "BaseBdev2", 00:11:21.393 "uuid": "3bfc8845-89be-50ea-841e-f0a6f9df33d5", 00:11:21.393 "is_configured": true, 00:11:21.393 "data_offset": 2048, 00:11:21.393 "data_size": 63488 00:11:21.393 }, 00:11:21.393 { 00:11:21.393 "name": "BaseBdev3", 00:11:21.393 "uuid": "57328d5e-6518-5ca4-b0ee-1ffd28c0f3f3", 00:11:21.393 "is_configured": true, 00:11:21.393 "data_offset": 2048, 00:11:21.393 "data_size": 63488 00:11:21.393 }, 00:11:21.393 { 00:11:21.393 "name": "BaseBdev4", 00:11:21.393 "uuid": "959f1736-cf0a-5173-ba87-4cf6ee670f02", 00:11:21.393 "is_configured": true, 00:11:21.393 "data_offset": 2048, 00:11:21.393 "data_size": 63488 00:11:21.393 } 00:11:21.393 ] 00:11:21.393 }' 00:11:21.393 02:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.393 02:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.651 02:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:21.651 02:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:21.910 [2024-11-28 02:26:55.360155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:22.850 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:22.850 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.850 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.850 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.850 02:26:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:22.850 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:22.850 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:22.850 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:22.850 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:22.850 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.851 02:26:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.851 "name": "raid_bdev1", 00:11:22.851 "uuid": "a6bb4841-4be0-4653-9b52-0c25c37357be", 00:11:22.851 "strip_size_kb": 0, 00:11:22.851 "state": "online", 00:11:22.851 "raid_level": "raid1", 00:11:22.851 "superblock": true, 00:11:22.851 "num_base_bdevs": 4, 00:11:22.851 "num_base_bdevs_discovered": 4, 00:11:22.851 "num_base_bdevs_operational": 4, 00:11:22.851 "base_bdevs_list": [ 00:11:22.851 { 00:11:22.851 "name": "BaseBdev1", 00:11:22.851 "uuid": "8c3db49e-968c-532c-b707-110f762b5356", 00:11:22.851 "is_configured": true, 00:11:22.851 "data_offset": 2048, 00:11:22.851 "data_size": 63488 00:11:22.851 }, 00:11:22.851 { 00:11:22.851 "name": "BaseBdev2", 00:11:22.851 "uuid": "3bfc8845-89be-50ea-841e-f0a6f9df33d5", 00:11:22.851 "is_configured": true, 00:11:22.851 "data_offset": 2048, 00:11:22.851 "data_size": 63488 00:11:22.851 }, 00:11:22.851 { 00:11:22.851 "name": "BaseBdev3", 00:11:22.851 "uuid": "57328d5e-6518-5ca4-b0ee-1ffd28c0f3f3", 00:11:22.851 "is_configured": true, 00:11:22.851 "data_offset": 2048, 00:11:22.851 "data_size": 63488 00:11:22.851 }, 00:11:22.851 { 00:11:22.851 "name": "BaseBdev4", 00:11:22.851 "uuid": "959f1736-cf0a-5173-ba87-4cf6ee670f02", 00:11:22.851 "is_configured": true, 00:11:22.851 "data_offset": 2048, 00:11:22.851 "data_size": 63488 00:11:22.851 } 00:11:22.851 ] 00:11:22.851 }' 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.851 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.111 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:23.111 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.111 02:26:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.432 [2024-11-28 02:26:56.790134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.432 [2024-11-28 02:26:56.790173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.432 [2024-11-28 02:26:56.792997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.432 [2024-11-28 02:26:56.793065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.432 [2024-11-28 02:26:56.793185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.432 [2024-11-28 02:26:56.793200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:23.432 { 00:11:23.432 "results": [ 00:11:23.432 { 00:11:23.432 "job": "raid_bdev1", 00:11:23.432 "core_mask": "0x1", 00:11:23.432 "workload": "randrw", 00:11:23.432 "percentage": 50, 00:11:23.432 "status": "finished", 00:11:23.432 "queue_depth": 1, 00:11:23.432 "io_size": 131072, 00:11:23.432 "runtime": 1.430963, 00:11:23.432 "iops": 10506.910381330614, 00:11:23.432 "mibps": 1313.3637976663267, 00:11:23.432 "io_failed": 0, 00:11:23.432 "io_timeout": 0, 00:11:23.432 "avg_latency_us": 92.27712153446906, 00:11:23.432 "min_latency_us": 23.699563318777294, 00:11:23.432 "max_latency_us": 1473.844541484716 00:11:23.432 } 00:11:23.432 ], 00:11:23.432 "core_count": 1 00:11:23.432 } 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74793 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74793 ']' 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74793 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74793 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74793' 00:11:23.432 killing process with pid 74793 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74793 00:11:23.432 [2024-11-28 02:26:56.827145] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.432 02:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74793 00:11:23.713 [2024-11-28 02:26:57.136089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.662 02:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Uw62bGvIdS 00:11:24.662 02:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:24.662 02:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:24.662 02:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:24.662 02:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:24.662 02:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:24.662 02:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:24.662 ************************************ 00:11:24.662 END TEST raid_read_error_test 00:11:24.662 02:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 
00:11:24.662 00:11:24.662 real 0m4.743s 00:11:24.662 user 0m5.620s 00:11:24.662 sys 0m0.627s 00:11:24.662 02:26:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.662 02:26:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.662 ************************************ 00:11:24.923 02:26:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:24.923 02:26:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:24.923 02:26:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.923 02:26:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:24.923 ************************************ 00:11:24.923 START TEST raid_write_error_test 00:11:24.923 ************************************ 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kJqmSSlLAi 00:11:24.923 02:26:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74933 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74933 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74933 ']' 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.923 02:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.923 [2024-11-28 02:26:58.480632] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:24.923 [2024-11-28 02:26:58.480820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74933 ] 00:11:25.183 [2024-11-28 02:26:58.632578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.183 [2024-11-28 02:26:58.745748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.443 [2024-11-28 02:26:58.943880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.443 [2024-11-28 02:26:58.943972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.703 BaseBdev1_malloc 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.703 true 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.703 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.703 [2024-11-28 02:26:59.378060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:25.703 [2024-11-28 02:26:59.378120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.703 [2024-11-28 02:26:59.378141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:25.703 [2024-11-28 02:26:59.378154] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.703 [2024-11-28 02:26:59.380203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.703 [2024-11-28 02:26:59.380250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:25.974 BaseBdev1 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.974 BaseBdev2_malloc 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:25.974 02:26:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.974 true 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.974 [2024-11-28 02:26:59.444031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:25.974 [2024-11-28 02:26:59.444087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.974 [2024-11-28 02:26:59.444107] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:25.974 [2024-11-28 02:26:59.444120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.974 [2024-11-28 02:26:59.446154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.974 [2024-11-28 02:26:59.446197] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:25.974 BaseBdev2 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:25.974 BaseBdev3_malloc 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.974 true 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.974 [2024-11-28 02:26:59.523576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:25.974 [2024-11-28 02:26:59.523641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.974 [2024-11-28 02:26:59.523662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:25.974 [2024-11-28 02:26:59.523675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.974 [2024-11-28 02:26:59.525777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.974 [2024-11-28 02:26:59.525823] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:25.974 BaseBdev3 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.974 BaseBdev4_malloc 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.974 true 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.974 [2024-11-28 02:26:59.595460] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:25.974 [2024-11-28 02:26:59.595520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.974 [2024-11-28 02:26:59.595542] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:25.974 [2024-11-28 02:26:59.595556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.974 [2024-11-28 02:26:59.597807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.974 [2024-11-28 02:26:59.597856] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:25.974 BaseBdev4 
00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.974 [2024-11-28 02:26:59.607494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.974 [2024-11-28 02:26:59.609371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.974 [2024-11-28 02:26:59.609450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.974 [2024-11-28 02:26:59.609515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:25.974 [2024-11-28 02:26:59.609758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:25.974 [2024-11-28 02:26:59.609774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:25.974 [2024-11-28 02:26:59.610064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:25.974 [2024-11-28 02:26:59.610285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:25.974 [2024-11-28 02:26:59.610297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:25.974 [2024-11-28 02:26:59.610488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.974 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.233 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.233 "name": "raid_bdev1", 00:11:26.233 "uuid": "a15fe72a-89c9-40ea-9fc6-b662435e7ed4", 00:11:26.233 "strip_size_kb": 0, 00:11:26.233 "state": "online", 00:11:26.233 "raid_level": "raid1", 00:11:26.233 "superblock": true, 00:11:26.233 "num_base_bdevs": 4, 00:11:26.233 "num_base_bdevs_discovered": 4, 00:11:26.233 
"num_base_bdevs_operational": 4, 00:11:26.233 "base_bdevs_list": [ 00:11:26.233 { 00:11:26.233 "name": "BaseBdev1", 00:11:26.233 "uuid": "f9f681e3-9a20-5f15-8f59-2c99d9cb49f5", 00:11:26.233 "is_configured": true, 00:11:26.233 "data_offset": 2048, 00:11:26.233 "data_size": 63488 00:11:26.233 }, 00:11:26.233 { 00:11:26.233 "name": "BaseBdev2", 00:11:26.233 "uuid": "82d5a073-1502-5cf9-902c-28ebc6fafbba", 00:11:26.233 "is_configured": true, 00:11:26.233 "data_offset": 2048, 00:11:26.233 "data_size": 63488 00:11:26.233 }, 00:11:26.233 { 00:11:26.233 "name": "BaseBdev3", 00:11:26.233 "uuid": "30682beb-353d-54c7-a6a9-d7d45487cff8", 00:11:26.233 "is_configured": true, 00:11:26.233 "data_offset": 2048, 00:11:26.233 "data_size": 63488 00:11:26.233 }, 00:11:26.233 { 00:11:26.233 "name": "BaseBdev4", 00:11:26.233 "uuid": "0be5fca7-ddac-534b-80d4-2bd95bfe5d1e", 00:11:26.233 "is_configured": true, 00:11:26.233 "data_offset": 2048, 00:11:26.233 "data_size": 63488 00:11:26.233 } 00:11:26.233 ] 00:11:26.233 }' 00:11:26.233 02:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.233 02:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.490 02:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:26.490 02:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:26.490 [2024-11-28 02:27:00.116118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.424 [2024-11-28 02:27:01.031520] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:27.424 [2024-11-28 02:27:01.031710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.424 [2024-11-28 02:27:01.032005] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.424 "name": "raid_bdev1", 00:11:27.424 "uuid": "a15fe72a-89c9-40ea-9fc6-b662435e7ed4", 00:11:27.424 "strip_size_kb": 0, 00:11:27.424 "state": "online", 00:11:27.424 "raid_level": "raid1", 00:11:27.424 "superblock": true, 00:11:27.424 "num_base_bdevs": 4, 00:11:27.424 "num_base_bdevs_discovered": 3, 00:11:27.424 "num_base_bdevs_operational": 3, 00:11:27.424 "base_bdevs_list": [ 00:11:27.424 { 00:11:27.424 "name": null, 00:11:27.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.424 "is_configured": false, 00:11:27.424 "data_offset": 0, 00:11:27.424 "data_size": 63488 00:11:27.424 }, 00:11:27.424 { 00:11:27.424 "name": "BaseBdev2", 00:11:27.424 "uuid": "82d5a073-1502-5cf9-902c-28ebc6fafbba", 00:11:27.424 "is_configured": true, 00:11:27.424 "data_offset": 2048, 00:11:27.424 "data_size": 63488 00:11:27.424 }, 00:11:27.424 { 00:11:27.424 "name": "BaseBdev3", 00:11:27.424 "uuid": "30682beb-353d-54c7-a6a9-d7d45487cff8", 00:11:27.424 "is_configured": true, 00:11:27.424 "data_offset": 2048, 00:11:27.424 "data_size": 63488 00:11:27.424 }, 00:11:27.424 { 00:11:27.424 "name": "BaseBdev4", 00:11:27.424 "uuid": "0be5fca7-ddac-534b-80d4-2bd95bfe5d1e", 00:11:27.424 "is_configured": true, 00:11:27.424 "data_offset": 2048, 00:11:27.424 "data_size": 63488 00:11:27.424 } 00:11:27.424 ] 
00:11:27.424 }' 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.424 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.988 [2024-11-28 02:27:01.479435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.988 [2024-11-28 02:27:01.479473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.988 [2024-11-28 02:27:01.482343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.988 [2024-11-28 02:27:01.482394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.988 [2024-11-28 02:27:01.482501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.988 [2024-11-28 02:27:01.482516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:27.988 { 00:11:27.988 "results": [ 00:11:27.988 { 00:11:27.988 "job": "raid_bdev1", 00:11:27.988 "core_mask": "0x1", 00:11:27.988 "workload": "randrw", 00:11:27.988 "percentage": 50, 00:11:27.988 "status": "finished", 00:11:27.988 "queue_depth": 1, 00:11:27.988 "io_size": 131072, 00:11:27.988 "runtime": 1.363992, 00:11:27.988 "iops": 10976.603968351721, 00:11:27.988 "mibps": 1372.0754960439651, 00:11:27.988 "io_failed": 0, 00:11:27.988 "io_timeout": 0, 00:11:27.988 "avg_latency_us": 87.99404722877172, 00:11:27.988 "min_latency_us": 25.823580786026202, 00:11:27.988 "max_latency_us": 1445.2262008733624 00:11:27.988 } 00:11:27.988 ], 00:11:27.988 "core_count": 1 
00:11:27.988 } 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74933 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74933 ']' 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74933 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74933 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.988 killing process with pid 74933 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74933' 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74933 00:11:27.988 [2024-11-28 02:27:01.526141] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.988 02:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74933 00:11:28.246 [2024-11-28 02:27:01.857182] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.619 02:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kJqmSSlLAi 00:11:29.619 02:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:29.619 02:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:29.619 02:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:29.619 02:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:29.619 02:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:29.619 02:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:29.619 ************************************ 00:11:29.619 END TEST raid_write_error_test 00:11:29.619 ************************************ 00:11:29.619 02:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:29.619 00:11:29.619 real 0m4.702s 00:11:29.619 user 0m5.500s 00:11:29.619 sys 0m0.589s 00:11:29.619 02:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.619 02:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.619 02:27:03 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:29.619 02:27:03 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:29.619 02:27:03 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:29.619 02:27:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:29.620 02:27:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.620 02:27:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.620 ************************************ 00:11:29.620 START TEST raid_rebuild_test 00:11:29.620 ************************************ 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:29.620 
02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75082 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75082 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75082 ']' 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.620 02:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.620 [2024-11-28 02:27:03.252145] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:29.620 [2024-11-28 02:27:03.252358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:29.620 Zero copy mechanism will not be used. 
00:11:29.620 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75082 ] 00:11:29.878 [2024-11-28 02:27:03.428389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.878 [2024-11-28 02:27:03.548478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.136 [2024-11-28 02:27:03.748031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.136 [2024-11-28 02:27:03.748149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.702 BaseBdev1_malloc 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.702 [2024-11-28 02:27:04.146809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:30.702 [2024-11-28 02:27:04.146941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.702 [2024-11-28 
02:27:04.146971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:30.702 [2024-11-28 02:27:04.146985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.702 [2024-11-28 02:27:04.149149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.702 [2024-11-28 02:27:04.149198] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.702 BaseBdev1 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.702 BaseBdev2_malloc 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.702 [2024-11-28 02:27:04.202296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:30.702 [2024-11-28 02:27:04.202368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.702 [2024-11-28 02:27:04.202392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:30.702 [2024-11-28 02:27:04.202406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:30.702 [2024-11-28 02:27:04.204533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.702 [2024-11-28 02:27:04.204580] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.702 BaseBdev2 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.702 spare_malloc 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.702 spare_delay 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.702 [2024-11-28 02:27:04.281634] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:30.702 [2024-11-28 02:27:04.281774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.702 [2024-11-28 02:27:04.281801] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:11:30.702 [2024-11-28 02:27:04.281815] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.702 [2024-11-28 02:27:04.283994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.702 [2024-11-28 02:27:04.284042] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:30.702 spare 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.702 [2024-11-28 02:27:04.293669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.702 [2024-11-28 02:27:04.295492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.702 [2024-11-28 02:27:04.295605] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:30.702 [2024-11-28 02:27:04.295630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:30.702 [2024-11-28 02:27:04.295877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:30.702 [2024-11-28 02:27:04.296078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:30.702 [2024-11-28 02:27:04.296093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:30.702 [2024-11-28 02:27:04.296273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.702 
02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:30.702 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.703 "name": "raid_bdev1", 00:11:30.703 "uuid": "a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:30.703 "strip_size_kb": 0, 00:11:30.703 "state": "online", 00:11:30.703 "raid_level": "raid1", 00:11:30.703 "superblock": false, 00:11:30.703 "num_base_bdevs": 2, 00:11:30.703 "num_base_bdevs_discovered": 
2, 00:11:30.703 "num_base_bdevs_operational": 2, 00:11:30.703 "base_bdevs_list": [ 00:11:30.703 { 00:11:30.703 "name": "BaseBdev1", 00:11:30.703 "uuid": "7ceb3316-4ccc-5e60-9d3b-4aa057cbbd95", 00:11:30.703 "is_configured": true, 00:11:30.703 "data_offset": 0, 00:11:30.703 "data_size": 65536 00:11:30.703 }, 00:11:30.703 { 00:11:30.703 "name": "BaseBdev2", 00:11:30.703 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:30.703 "is_configured": true, 00:11:30.703 "data_offset": 0, 00:11:30.703 "data_size": 65536 00:11:30.703 } 00:11:30.703 ] 00:11:30.703 }' 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.703 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.270 [2024-11-28 02:27:04.749379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.270 02:27:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:31.529 [2024-11-28 02:27:05.000702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:31.529 /dev/nbd0 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.529 1+0 records in 00:11:31.529 1+0 records out 00:11:31.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539607 s, 7.6 MB/s 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:11:31.529 02:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:36.803 65536+0 records in 00:11:36.803 65536+0 records out 00:11:36.803 33554432 bytes (34 MB, 32 MiB) copied, 4.42959 s, 7.6 MB/s 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:36.803 [2024-11-28 02:27:09.697306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.803 
02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.803 [2024-11-28 02:27:09.729323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.803 "name": "raid_bdev1", 00:11:36.803 "uuid": "a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:36.803 "strip_size_kb": 0, 00:11:36.803 "state": "online", 00:11:36.803 "raid_level": "raid1", 00:11:36.803 "superblock": false, 00:11:36.803 "num_base_bdevs": 2, 00:11:36.803 "num_base_bdevs_discovered": 1, 00:11:36.803 "num_base_bdevs_operational": 1, 00:11:36.803 "base_bdevs_list": [ 00:11:36.803 { 00:11:36.803 "name": null, 00:11:36.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.803 "is_configured": false, 00:11:36.803 "data_offset": 0, 00:11:36.803 "data_size": 65536 00:11:36.803 }, 00:11:36.803 { 00:11:36.803 "name": "BaseBdev2", 00:11:36.803 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:36.803 "is_configured": true, 00:11:36.803 "data_offset": 0, 00:11:36.803 "data_size": 65536 00:11:36.803 } 00:11:36.803 ] 00:11:36.803 }' 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.803 02:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.803 02:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:36.803 02:27:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.803 02:27:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.803 [2024-11-28 02:27:10.156594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:36.803 [2024-11-28 02:27:10.171973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:11:36.803 02:27:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.803 02:27:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:36.803 [2024-11-28 02:27:10.173950] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.744 "name": "raid_bdev1", 00:11:37.744 "uuid": "a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:37.744 "strip_size_kb": 0, 00:11:37.744 "state": "online", 00:11:37.744 "raid_level": "raid1", 00:11:37.744 "superblock": false, 00:11:37.744 "num_base_bdevs": 2, 00:11:37.744 "num_base_bdevs_discovered": 2, 00:11:37.744 "num_base_bdevs_operational": 2, 00:11:37.744 "process": { 00:11:37.744 "type": "rebuild", 00:11:37.744 "target": "spare", 00:11:37.744 "progress": { 00:11:37.744 "blocks": 20480, 00:11:37.744 "percent": 31 00:11:37.744 } 00:11:37.744 }, 00:11:37.744 "base_bdevs_list": [ 00:11:37.744 { 
00:11:37.744 "name": "spare", 00:11:37.744 "uuid": "598e4f03-a877-5377-9d28-8c76e5888d69", 00:11:37.744 "is_configured": true, 00:11:37.744 "data_offset": 0, 00:11:37.744 "data_size": 65536 00:11:37.744 }, 00:11:37.744 { 00:11:37.744 "name": "BaseBdev2", 00:11:37.744 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:37.744 "is_configured": true, 00:11:37.744 "data_offset": 0, 00:11:37.744 "data_size": 65536 00:11:37.744 } 00:11:37.744 ] 00:11:37.744 }' 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.744 [2024-11-28 02:27:11.337419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.744 [2024-11-28 02:27:11.379917] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:37.744 [2024-11-28 02:27:11.380016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.744 [2024-11-28 02:27:11.380035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.744 [2024-11-28 02:27:11.380047] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.744 02:27:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.744 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.005 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.005 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.005 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.005 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.005 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.005 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.005 "name": "raid_bdev1", 00:11:38.005 "uuid": "a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:38.005 "strip_size_kb": 0, 00:11:38.005 "state": "online", 00:11:38.005 "raid_level": "raid1", 00:11:38.005 "superblock": false, 00:11:38.005 "num_base_bdevs": 2, 00:11:38.005 "num_base_bdevs_discovered": 1, 
00:11:38.005 "num_base_bdevs_operational": 1, 00:11:38.005 "base_bdevs_list": [ 00:11:38.005 { 00:11:38.005 "name": null, 00:11:38.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.005 "is_configured": false, 00:11:38.005 "data_offset": 0, 00:11:38.005 "data_size": 65536 00:11:38.005 }, 00:11:38.005 { 00:11:38.005 "name": "BaseBdev2", 00:11:38.005 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:38.005 "is_configured": true, 00:11:38.005 "data_offset": 0, 00:11:38.005 "data_size": 65536 00:11:38.005 } 00:11:38.005 ] 00:11:38.006 }' 00:11:38.006 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.006 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.267 "name": "raid_bdev1", 00:11:38.267 "uuid": 
"a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:38.267 "strip_size_kb": 0, 00:11:38.267 "state": "online", 00:11:38.267 "raid_level": "raid1", 00:11:38.267 "superblock": false, 00:11:38.267 "num_base_bdevs": 2, 00:11:38.267 "num_base_bdevs_discovered": 1, 00:11:38.267 "num_base_bdevs_operational": 1, 00:11:38.267 "base_bdevs_list": [ 00:11:38.267 { 00:11:38.267 "name": null, 00:11:38.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.267 "is_configured": false, 00:11:38.267 "data_offset": 0, 00:11:38.267 "data_size": 65536 00:11:38.267 }, 00:11:38.267 { 00:11:38.267 "name": "BaseBdev2", 00:11:38.267 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:38.267 "is_configured": true, 00:11:38.267 "data_offset": 0, 00:11:38.267 "data_size": 65536 00:11:38.267 } 00:11:38.267 ] 00:11:38.267 }' 00:11:38.267 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.528 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:38.528 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.528 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.528 02:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:38.528 02:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.528 02:27:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.528 [2024-11-28 02:27:12.006724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:38.528 [2024-11-28 02:27:12.022629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:11:38.528 02:27:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.528 02:27:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:11:38.528 [2024-11-28 02:27:12.024472] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.469 "name": "raid_bdev1", 00:11:39.469 "uuid": "a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:39.469 "strip_size_kb": 0, 00:11:39.469 "state": "online", 00:11:39.469 "raid_level": "raid1", 00:11:39.469 "superblock": false, 00:11:39.469 "num_base_bdevs": 2, 00:11:39.469 "num_base_bdevs_discovered": 2, 00:11:39.469 "num_base_bdevs_operational": 2, 00:11:39.469 "process": { 00:11:39.469 "type": "rebuild", 00:11:39.469 "target": "spare", 00:11:39.469 "progress": { 00:11:39.469 "blocks": 20480, 00:11:39.469 "percent": 31 00:11:39.469 } 00:11:39.469 }, 00:11:39.469 "base_bdevs_list": [ 00:11:39.469 { 00:11:39.469 "name": "spare", 00:11:39.469 "uuid": 
"598e4f03-a877-5377-9d28-8c76e5888d69", 00:11:39.469 "is_configured": true, 00:11:39.469 "data_offset": 0, 00:11:39.469 "data_size": 65536 00:11:39.469 }, 00:11:39.469 { 00:11:39.469 "name": "BaseBdev2", 00:11:39.469 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:39.469 "is_configured": true, 00:11:39.469 "data_offset": 0, 00:11:39.469 "data_size": 65536 00:11:39.469 } 00:11:39.469 ] 00:11:39.469 }' 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.469 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=363 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.729 "name": "raid_bdev1", 00:11:39.729 "uuid": "a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:39.729 "strip_size_kb": 0, 00:11:39.729 "state": "online", 00:11:39.729 "raid_level": "raid1", 00:11:39.729 "superblock": false, 00:11:39.729 "num_base_bdevs": 2, 00:11:39.729 "num_base_bdevs_discovered": 2, 00:11:39.729 "num_base_bdevs_operational": 2, 00:11:39.729 "process": { 00:11:39.729 "type": "rebuild", 00:11:39.729 "target": "spare", 00:11:39.729 "progress": { 00:11:39.729 "blocks": 22528, 00:11:39.729 "percent": 34 00:11:39.729 } 00:11:39.729 }, 00:11:39.729 "base_bdevs_list": [ 00:11:39.729 { 00:11:39.729 "name": "spare", 00:11:39.729 "uuid": "598e4f03-a877-5377-9d28-8c76e5888d69", 00:11:39.729 "is_configured": true, 00:11:39.729 "data_offset": 0, 00:11:39.729 "data_size": 65536 00:11:39.729 }, 00:11:39.729 { 00:11:39.729 "name": "BaseBdev2", 00:11:39.729 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:39.729 "is_configured": true, 00:11:39.729 "data_offset": 0, 00:11:39.729 "data_size": 65536 00:11:39.729 } 00:11:39.729 ] 00:11:39.729 }' 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.729 02:27:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:40.668 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:40.668 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.668 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.668 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.668 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.668 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.668 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.668 02:27:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.668 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.668 02:27:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.928 02:27:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.928 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.928 "name": "raid_bdev1", 00:11:40.928 "uuid": "a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:40.928 "strip_size_kb": 0, 00:11:40.928 "state": "online", 00:11:40.928 "raid_level": "raid1", 00:11:40.928 "superblock": false, 00:11:40.928 "num_base_bdevs": 2, 00:11:40.928 "num_base_bdevs_discovered": 2, 00:11:40.928 "num_base_bdevs_operational": 2, 00:11:40.928 "process": { 00:11:40.928 "type": "rebuild", 00:11:40.928 "target": "spare", 
00:11:40.928 "progress": { 00:11:40.928 "blocks": 47104, 00:11:40.928 "percent": 71 00:11:40.928 } 00:11:40.928 }, 00:11:40.928 "base_bdevs_list": [ 00:11:40.928 { 00:11:40.928 "name": "spare", 00:11:40.928 "uuid": "598e4f03-a877-5377-9d28-8c76e5888d69", 00:11:40.928 "is_configured": true, 00:11:40.928 "data_offset": 0, 00:11:40.928 "data_size": 65536 00:11:40.928 }, 00:11:40.928 { 00:11:40.928 "name": "BaseBdev2", 00:11:40.928 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:40.928 "is_configured": true, 00:11:40.928 "data_offset": 0, 00:11:40.928 "data_size": 65536 00:11:40.928 } 00:11:40.928 ] 00:11:40.928 }' 00:11:40.928 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.928 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.928 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.928 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.928 02:27:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:41.874 [2024-11-28 02:27:15.239162] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:41.874 [2024-11-28 02:27:15.239354] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:41.874 [2024-11-28 02:27:15.239413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.874 "name": "raid_bdev1", 00:11:41.874 "uuid": "a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:41.874 "strip_size_kb": 0, 00:11:41.874 "state": "online", 00:11:41.874 "raid_level": "raid1", 00:11:41.874 "superblock": false, 00:11:41.874 "num_base_bdevs": 2, 00:11:41.874 "num_base_bdevs_discovered": 2, 00:11:41.874 "num_base_bdevs_operational": 2, 00:11:41.874 "base_bdevs_list": [ 00:11:41.874 { 00:11:41.874 "name": "spare", 00:11:41.874 "uuid": "598e4f03-a877-5377-9d28-8c76e5888d69", 00:11:41.874 "is_configured": true, 00:11:41.874 "data_offset": 0, 00:11:41.874 "data_size": 65536 00:11:41.874 }, 00:11:41.874 { 00:11:41.874 "name": "BaseBdev2", 00:11:41.874 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:41.874 "is_configured": true, 00:11:41.874 "data_offset": 0, 00:11:41.874 "data_size": 65536 00:11:41.874 } 00:11:41.874 ] 00:11:41.874 }' 00:11:41.874 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.134 "name": "raid_bdev1", 00:11:42.134 "uuid": "a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:42.134 "strip_size_kb": 0, 00:11:42.134 "state": "online", 00:11:42.134 "raid_level": "raid1", 00:11:42.134 "superblock": false, 00:11:42.134 "num_base_bdevs": 2, 00:11:42.134 "num_base_bdevs_discovered": 2, 00:11:42.134 "num_base_bdevs_operational": 2, 00:11:42.134 "base_bdevs_list": [ 00:11:42.134 { 00:11:42.134 "name": "spare", 00:11:42.134 "uuid": "598e4f03-a877-5377-9d28-8c76e5888d69", 00:11:42.134 "is_configured": true, 00:11:42.134 "data_offset": 0, 00:11:42.134 "data_size": 65536 
00:11:42.134 }, 00:11:42.134 { 00:11:42.134 "name": "BaseBdev2", 00:11:42.134 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:42.134 "is_configured": true, 00:11:42.134 "data_offset": 0, 00:11:42.134 "data_size": 65536 00:11:42.134 } 00:11:42.134 ] 00:11:42.134 }' 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.134 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.135 02:27:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:42.135 02:27:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.135 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.135 02:27:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.135 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.135 "name": "raid_bdev1", 00:11:42.135 "uuid": "a6cb71cc-9b2d-4867-8f7a-805dcc5e379e", 00:11:42.135 "strip_size_kb": 0, 00:11:42.135 "state": "online", 00:11:42.135 "raid_level": "raid1", 00:11:42.135 "superblock": false, 00:11:42.135 "num_base_bdevs": 2, 00:11:42.135 "num_base_bdevs_discovered": 2, 00:11:42.135 "num_base_bdevs_operational": 2, 00:11:42.135 "base_bdevs_list": [ 00:11:42.135 { 00:11:42.135 "name": "spare", 00:11:42.135 "uuid": "598e4f03-a877-5377-9d28-8c76e5888d69", 00:11:42.135 "is_configured": true, 00:11:42.135 "data_offset": 0, 00:11:42.135 "data_size": 65536 00:11:42.135 }, 00:11:42.135 { 00:11:42.135 "name": "BaseBdev2", 00:11:42.135 "uuid": "0c83b04e-3edc-5909-9a9a-b69327803b4c", 00:11:42.135 "is_configured": true, 00:11:42.135 "data_offset": 0, 00:11:42.135 "data_size": 65536 00:11:42.135 } 00:11:42.135 ] 00:11:42.135 }' 00:11:42.135 02:27:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.135 02:27:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.703 02:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.703 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.703 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.703 [2024-11-28 02:27:16.152995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.703 [2024-11-28 02:27:16.153100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:11:42.703 [2024-11-28 02:27:16.153293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.703 [2024-11-28 02:27:16.153410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.703 [2024-11-28 02:27:16.153470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:42.704 02:27:16 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:42.704 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:42.968 /dev/nbd0 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.968 1+0 records in 00:11:42.968 1+0 records out 00:11:42.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450191 s, 9.1 MB/s 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:42.968 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:42.968 /dev/nbd1 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.246 1+0 records in 00:11:43.246 1+0 records out 00:11:43.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437001 s, 9.4 MB/s 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.246 02:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:43.507 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:11:43.507 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:43.507 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:43.507 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.507 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.507 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:43.507 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:43.507 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.507 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.507 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75082 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 75082 ']' 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75082 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75082 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75082' 00:11:43.767 killing process with pid 75082 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75082 00:11:43.767 Received shutdown signal, test time was about 60.000000 seconds 00:11:43.767 00:11:43.767 Latency(us) 00:11:43.767 [2024-11-28T02:27:17.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.767 [2024-11-28T02:27:17.446Z] =================================================================================================================== 00:11:43.767 [2024-11-28T02:27:17.446Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:43.767 [2024-11-28 02:27:17.376312] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.767 02:27:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75082 00:11:44.026 [2024-11-28 02:27:17.665708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:45.404 00:11:45.404 real 0m15.595s 00:11:45.404 user 0m17.443s 00:11:45.404 sys 0m3.054s 00:11:45.404 ************************************ 
00:11:45.404 END TEST raid_rebuild_test 00:11:45.404 ************************************ 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.404 02:27:18 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:45.404 02:27:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:45.404 02:27:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.404 02:27:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.404 ************************************ 00:11:45.404 START TEST raid_rebuild_test_sb 00:11:45.404 ************************************ 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:45.404 
02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75501 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75501 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75501 ']' 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.404 02:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.405 02:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.405 02:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.405 [2024-11-28 02:27:18.913642] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:45.405 [2024-11-28 02:27:18.913837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:45.405 Zero copy mechanism will not be used. 
00:11:45.405 -allocations --file-prefix=spdk_pid75501 ] 00:11:45.664 [2024-11-28 02:27:19.088809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.664 [2024-11-28 02:27:19.203887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.923 [2024-11-28 02:27:19.404717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.923 [2024-11-28 02:27:19.404877] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.183 BaseBdev1_malloc 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.183 [2024-11-28 02:27:19.807832] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:46.183 [2024-11-28 02:27:19.807906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.183 [2024-11-28 02:27:19.807945] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:11:46.183 [2024-11-28 02:27:19.807960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.183 [2024-11-28 02:27:19.810140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.183 [2024-11-28 02:27:19.810186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:46.183 BaseBdev1 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.183 BaseBdev2_malloc 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.183 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.443 [2024-11-28 02:27:19.861690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:46.443 [2024-11-28 02:27:19.861776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.443 [2024-11-28 02:27:19.861801] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:46.443 [2024-11-28 02:27:19.861814] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.443 [2024-11-28 02:27:19.863868] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.443 [2024-11-28 02:27:19.863913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:46.443 BaseBdev2 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.443 spare_malloc 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.443 spare_delay 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.443 [2024-11-28 02:27:19.943331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:46.443 [2024-11-28 02:27:19.943449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.443 [2024-11-28 02:27:19.943476] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 
00:11:46.443 [2024-11-28 02:27:19.943489] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.443 [2024-11-28 02:27:19.945700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.443 [2024-11-28 02:27:19.945748] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:46.443 spare 00:11:46.443 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.444 [2024-11-28 02:27:19.955364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.444 [2024-11-28 02:27:19.957178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.444 [2024-11-28 02:27:19.957358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:46.444 [2024-11-28 02:27:19.957375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.444 [2024-11-28 02:27:19.957614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:46.444 [2024-11-28 02:27:19.957779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:46.444 [2024-11-28 02:27:19.957789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:46.444 [2024-11-28 02:27:19.957955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.444 
02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.444 02:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.444 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.444 "name": "raid_bdev1", 00:11:46.444 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:46.444 "strip_size_kb": 0, 00:11:46.444 "state": "online", 00:11:46.444 "raid_level": "raid1", 00:11:46.444 "superblock": true, 00:11:46.444 "num_base_bdevs": 
2, 00:11:46.444 "num_base_bdevs_discovered": 2, 00:11:46.444 "num_base_bdevs_operational": 2, 00:11:46.444 "base_bdevs_list": [ 00:11:46.444 { 00:11:46.444 "name": "BaseBdev1", 00:11:46.444 "uuid": "a9506c6f-c735-5166-9e6e-f35e106ceaa1", 00:11:46.444 "is_configured": true, 00:11:46.444 "data_offset": 2048, 00:11:46.444 "data_size": 63488 00:11:46.444 }, 00:11:46.444 { 00:11:46.444 "name": "BaseBdev2", 00:11:46.444 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:46.444 "is_configured": true, 00:11:46.444 "data_offset": 2048, 00:11:46.444 "data_size": 63488 00:11:46.444 } 00:11:46.444 ] 00:11:46.444 }' 00:11:46.444 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.444 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.013 [2024-11-28 02:27:20.390893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:47.013 [2024-11-28 02:27:20.650251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:47.013 /dev/nbd0 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:47.013 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.274 1+0 records in 00:11:47.274 1+0 records out 00:11:47.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327794 s, 12.5 MB/s 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:47.274 02:27:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:47.274 02:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:51.476 63488+0 records in 00:11:51.476 63488+0 records out 00:11:51.476 32505856 bytes (33 MB, 31 MiB) copied, 4.01405 s, 8.1 MB/s 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:51.476 [2024-11-28 02:27:24.920656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.476 [2024-11-28 02:27:24.952703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.476 02:27:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.476 02:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.476 02:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.476 "name": "raid_bdev1", 00:11:51.476 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:51.476 "strip_size_kb": 0, 00:11:51.476 "state": "online", 00:11:51.476 "raid_level": "raid1", 00:11:51.476 "superblock": true, 00:11:51.476 "num_base_bdevs": 2, 00:11:51.476 "num_base_bdevs_discovered": 1, 00:11:51.476 "num_base_bdevs_operational": 1, 00:11:51.476 "base_bdevs_list": [ 00:11:51.476 { 00:11:51.476 "name": null, 00:11:51.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.476 "is_configured": false, 00:11:51.476 "data_offset": 0, 00:11:51.476 "data_size": 63488 00:11:51.476 }, 00:11:51.476 { 00:11:51.476 "name": "BaseBdev2", 00:11:51.476 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:51.476 "is_configured": true, 00:11:51.476 "data_offset": 2048, 00:11:51.476 "data_size": 63488 00:11:51.476 } 00:11:51.476 ] 00:11:51.476 }' 00:11:51.476 02:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.476 02:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.739 02:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:51.739 02:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.739 02:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.740 [2024-11-28 02:27:25.392025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:11:51.740 [2024-11-28 02:27:25.409166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:11:51.740 02:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.740 [2024-11-28 02:27:25.411028] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:51.740 02:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.122 "name": "raid_bdev1", 00:11:53.122 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:53.122 "strip_size_kb": 0, 00:11:53.122 "state": "online", 00:11:53.122 "raid_level": "raid1", 00:11:53.122 "superblock": true, 00:11:53.122 "num_base_bdevs": 2, 00:11:53.122 
"num_base_bdevs_discovered": 2, 00:11:53.122 "num_base_bdevs_operational": 2, 00:11:53.122 "process": { 00:11:53.122 "type": "rebuild", 00:11:53.122 "target": "spare", 00:11:53.122 "progress": { 00:11:53.122 "blocks": 20480, 00:11:53.122 "percent": 32 00:11:53.122 } 00:11:53.122 }, 00:11:53.122 "base_bdevs_list": [ 00:11:53.122 { 00:11:53.122 "name": "spare", 00:11:53.122 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:11:53.122 "is_configured": true, 00:11:53.122 "data_offset": 2048, 00:11:53.122 "data_size": 63488 00:11:53.122 }, 00:11:53.122 { 00:11:53.122 "name": "BaseBdev2", 00:11:53.122 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:53.122 "is_configured": true, 00:11:53.122 "data_offset": 2048, 00:11:53.122 "data_size": 63488 00:11:53.122 } 00:11:53.122 ] 00:11:53.122 }' 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.122 [2024-11-28 02:27:26.563243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.122 [2024-11-28 02:27:26.616267] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:53.122 [2024-11-28 02:27:26.616336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.122 [2024-11-28 02:27:26.616354] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.122 [2024-11-28 02:27:26.616370] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.122 02:27:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.122 "name": "raid_bdev1", 00:11:53.122 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:53.122 "strip_size_kb": 0, 00:11:53.122 "state": "online", 00:11:53.122 "raid_level": "raid1", 00:11:53.122 "superblock": true, 00:11:53.122 "num_base_bdevs": 2, 00:11:53.122 "num_base_bdevs_discovered": 1, 00:11:53.122 "num_base_bdevs_operational": 1, 00:11:53.122 "base_bdevs_list": [ 00:11:53.122 { 00:11:53.122 "name": null, 00:11:53.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.122 "is_configured": false, 00:11:53.122 "data_offset": 0, 00:11:53.122 "data_size": 63488 00:11:53.122 }, 00:11:53.122 { 00:11:53.122 "name": "BaseBdev2", 00:11:53.122 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:53.122 "is_configured": true, 00:11:53.122 "data_offset": 2048, 00:11:53.122 "data_size": 63488 00:11:53.122 } 00:11:53.122 ] 00:11:53.122 }' 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.122 02:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.382 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:53.382 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.382 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:53.382 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:53.382 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.382 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.382 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.382 02:27:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.382 02:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.382 02:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.643 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.643 "name": "raid_bdev1", 00:11:53.643 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:53.643 "strip_size_kb": 0, 00:11:53.643 "state": "online", 00:11:53.643 "raid_level": "raid1", 00:11:53.643 "superblock": true, 00:11:53.643 "num_base_bdevs": 2, 00:11:53.643 "num_base_bdevs_discovered": 1, 00:11:53.643 "num_base_bdevs_operational": 1, 00:11:53.643 "base_bdevs_list": [ 00:11:53.643 { 00:11:53.643 "name": null, 00:11:53.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.643 "is_configured": false, 00:11:53.643 "data_offset": 0, 00:11:53.643 "data_size": 63488 00:11:53.643 }, 00:11:53.643 { 00:11:53.643 "name": "BaseBdev2", 00:11:53.643 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:53.643 "is_configured": true, 00:11:53.643 "data_offset": 2048, 00:11:53.643 "data_size": 63488 00:11:53.643 } 00:11:53.643 ] 00:11:53.643 }' 00:11:53.643 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.643 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:53.643 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.643 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:53.643 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:53.643 02:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.643 02:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:53.643 [2024-11-28 02:27:27.158666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:53.643 [2024-11-28 02:27:27.175333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:11:53.643 02:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.643 02:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:53.643 [2024-11-28 02:27:27.177332] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.583 "name": "raid_bdev1", 00:11:54.583 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:54.583 "strip_size_kb": 0, 00:11:54.583 "state": "online", 00:11:54.583 "raid_level": "raid1", 
00:11:54.583 "superblock": true, 00:11:54.583 "num_base_bdevs": 2, 00:11:54.583 "num_base_bdevs_discovered": 2, 00:11:54.583 "num_base_bdevs_operational": 2, 00:11:54.583 "process": { 00:11:54.583 "type": "rebuild", 00:11:54.583 "target": "spare", 00:11:54.583 "progress": { 00:11:54.583 "blocks": 20480, 00:11:54.583 "percent": 32 00:11:54.583 } 00:11:54.583 }, 00:11:54.583 "base_bdevs_list": [ 00:11:54.583 { 00:11:54.583 "name": "spare", 00:11:54.583 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:11:54.583 "is_configured": true, 00:11:54.583 "data_offset": 2048, 00:11:54.583 "data_size": 63488 00:11:54.583 }, 00:11:54.583 { 00:11:54.583 "name": "BaseBdev2", 00:11:54.583 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:54.583 "is_configured": true, 00:11:54.583 "data_offset": 2048, 00:11:54.583 "data_size": 63488 00:11:54.583 } 00:11:54.583 ] 00:11:54.583 }' 00:11:54.583 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:54.843 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:54.843 02:27:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=378 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.843 "name": "raid_bdev1", 00:11:54.843 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:54.843 "strip_size_kb": 0, 00:11:54.843 "state": "online", 00:11:54.843 "raid_level": "raid1", 00:11:54.843 "superblock": true, 00:11:54.843 "num_base_bdevs": 2, 00:11:54.843 "num_base_bdevs_discovered": 2, 00:11:54.843 "num_base_bdevs_operational": 2, 00:11:54.843 "process": { 00:11:54.843 "type": "rebuild", 00:11:54.843 "target": "spare", 00:11:54.843 "progress": { 00:11:54.843 "blocks": 22528, 00:11:54.843 "percent": 35 00:11:54.843 } 00:11:54.843 }, 00:11:54.843 "base_bdevs_list": [ 
00:11:54.843 { 00:11:54.843 "name": "spare", 00:11:54.843 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:11:54.843 "is_configured": true, 00:11:54.843 "data_offset": 2048, 00:11:54.843 "data_size": 63488 00:11:54.843 }, 00:11:54.843 { 00:11:54.843 "name": "BaseBdev2", 00:11:54.843 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:54.843 "is_configured": true, 00:11:54.843 "data_offset": 2048, 00:11:54.843 "data_size": 63488 00:11:54.843 } 00:11:54.843 ] 00:11:54.843 }' 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:54.843 02:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.226 "name": "raid_bdev1", 00:11:56.226 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:56.226 "strip_size_kb": 0, 00:11:56.226 "state": "online", 00:11:56.226 "raid_level": "raid1", 00:11:56.226 "superblock": true, 00:11:56.226 "num_base_bdevs": 2, 00:11:56.226 "num_base_bdevs_discovered": 2, 00:11:56.226 "num_base_bdevs_operational": 2, 00:11:56.226 "process": { 00:11:56.226 "type": "rebuild", 00:11:56.226 "target": "spare", 00:11:56.226 "progress": { 00:11:56.226 "blocks": 45056, 00:11:56.226 "percent": 70 00:11:56.226 } 00:11:56.226 }, 00:11:56.226 "base_bdevs_list": [ 00:11:56.226 { 00:11:56.226 "name": "spare", 00:11:56.226 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:11:56.226 "is_configured": true, 00:11:56.226 "data_offset": 2048, 00:11:56.226 "data_size": 63488 00:11:56.226 }, 00:11:56.226 { 00:11:56.226 "name": "BaseBdev2", 00:11:56.226 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:56.226 "is_configured": true, 00:11:56.226 "data_offset": 2048, 00:11:56.226 "data_size": 63488 00:11:56.226 } 00:11:56.226 ] 00:11:56.226 }' 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.226 02:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:56.796 [2024-11-28 
02:27:30.290460] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:56.796 [2024-11-28 02:27:30.290554] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:56.796 [2024-11-28 02:27:30.290697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.056 "name": "raid_bdev1", 00:11:57.056 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:57.056 "strip_size_kb": 0, 00:11:57.056 "state": "online", 00:11:57.056 "raid_level": "raid1", 00:11:57.056 "superblock": true, 00:11:57.056 "num_base_bdevs": 2, 00:11:57.056 "num_base_bdevs_discovered": 2, 00:11:57.056 
"num_base_bdevs_operational": 2, 00:11:57.056 "base_bdevs_list": [ 00:11:57.056 { 00:11:57.056 "name": "spare", 00:11:57.056 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:11:57.056 "is_configured": true, 00:11:57.056 "data_offset": 2048, 00:11:57.056 "data_size": 63488 00:11:57.056 }, 00:11:57.056 { 00:11:57.056 "name": "BaseBdev2", 00:11:57.056 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:57.056 "is_configured": true, 00:11:57.056 "data_offset": 2048, 00:11:57.056 "data_size": 63488 00:11:57.056 } 00:11:57.056 ] 00:11:57.056 }' 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.056 02:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.317 "name": "raid_bdev1", 00:11:57.317 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:57.317 "strip_size_kb": 0, 00:11:57.317 "state": "online", 00:11:57.317 "raid_level": "raid1", 00:11:57.317 "superblock": true, 00:11:57.317 "num_base_bdevs": 2, 00:11:57.317 "num_base_bdevs_discovered": 2, 00:11:57.317 "num_base_bdevs_operational": 2, 00:11:57.317 "base_bdevs_list": [ 00:11:57.317 { 00:11:57.317 "name": "spare", 00:11:57.317 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:11:57.317 "is_configured": true, 00:11:57.317 "data_offset": 2048, 00:11:57.317 "data_size": 63488 00:11:57.317 }, 00:11:57.317 { 00:11:57.317 "name": "BaseBdev2", 00:11:57.317 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:57.317 "is_configured": true, 00:11:57.317 "data_offset": 2048, 00:11:57.317 "data_size": 63488 00:11:57.317 } 00:11:57.317 ] 00:11:57.317 }' 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.317 02:27:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.317 "name": "raid_bdev1", 00:11:57.317 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:57.317 "strip_size_kb": 0, 00:11:57.317 "state": "online", 00:11:57.317 "raid_level": "raid1", 00:11:57.317 "superblock": true, 00:11:57.317 "num_base_bdevs": 2, 00:11:57.317 "num_base_bdevs_discovered": 2, 00:11:57.317 "num_base_bdevs_operational": 2, 00:11:57.317 "base_bdevs_list": [ 00:11:57.317 { 00:11:57.317 "name": "spare", 00:11:57.317 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:11:57.317 "is_configured": true, 00:11:57.317 "data_offset": 2048, 00:11:57.317 "data_size": 63488 00:11:57.317 }, 00:11:57.317 { 
00:11:57.317 "name": "BaseBdev2", 00:11:57.317 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:57.317 "is_configured": true, 00:11:57.317 "data_offset": 2048, 00:11:57.317 "data_size": 63488 00:11:57.317 } 00:11:57.317 ] 00:11:57.317 }' 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.317 02:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.577 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.577 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.577 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.577 [2024-11-28 02:27:31.243636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.577 [2024-11-28 02:27:31.243756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.577 [2024-11-28 02:27:31.243872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.577 [2024-11-28 02:27:31.243998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.577 [2024-11-28 02:27:31.244054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:57.577 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.577 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:57.577 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.577 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.577 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.837 02:27:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:57.837 /dev/nbd0 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:57.837 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.098 1+0 records in 00:11:58.098 1+0 records out 00:11:58.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462247 s, 8.9 MB/s 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:58.098 /dev/nbd1 00:11:58.098 02:27:31 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.098 1+0 records in 00:11:58.098 1+0 records out 00:11:58.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215725 s, 19.0 MB/s 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:58.098 02:27:31 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.098 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:58.358 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:58.358 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.358 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.358 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:58.358 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:58.358 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.358 02:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:58.618 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:58.618 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:58.618 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:58.618 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.618 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.618 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:58.618 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:58.618 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.618 02:27:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.618 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.878 [2024-11-28 02:27:32.373275] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
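The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step above compares the two exported devices while skipping the first 1 MiB of each, i.e. the region holding per-device metadata such as the superblock. The offset semantics can be sketched with ordinary files (hypothetical contents, not the real device data):

```shell
# cmp -i N skips the first N bytes of BOTH files before comparing.
# Here the 4-byte prefixes differ (like differing superblocks) but the
# payload after the offset is identical, so cmp exits 0.
printf 'AAAApayload' > file_a
printf 'BBBBpayload' > file_b
if cmp -s -i 4 file_a file_b; then
    verdict=identical-after-offset
else
    verdict=different
fi
rm -f file_a file_b
echo "$verdict"
```

A nonzero `cmp` exit here would fail the test, since a rebuilt mirror must match its source byte-for-byte past the skipped region.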
spare_delay 00:11:58.878 [2024-11-28 02:27:32.373347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.878 [2024-11-28 02:27:32.373378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:58.878 [2024-11-28 02:27:32.373389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.878 [2024-11-28 02:27:32.375627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.878 [2024-11-28 02:27:32.375677] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:58.878 [2024-11-28 02:27:32.375782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:58.878 [2024-11-28 02:27:32.375830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:58.878 [2024-11-28 02:27:32.375997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.878 spare 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.878 [2024-11-28 02:27:32.475932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:58.878 [2024-11-28 02:27:32.475975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.878 [2024-11-28 02:27:32.476305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:11:58.878 [2024-11-28 02:27:32.476539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:58.878 [2024-11-28 02:27:32.476551] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:11:58.878 [2024-11-28 02:27:32.476755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.878 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.879 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.879 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.879 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.879 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.879 
02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.879 "name": "raid_bdev1", 00:11:58.879 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:58.879 "strip_size_kb": 0, 00:11:58.879 "state": "online", 00:11:58.879 "raid_level": "raid1", 00:11:58.879 "superblock": true, 00:11:58.879 "num_base_bdevs": 2, 00:11:58.879 "num_base_bdevs_discovered": 2, 00:11:58.879 "num_base_bdevs_operational": 2, 00:11:58.879 "base_bdevs_list": [ 00:11:58.879 { 00:11:58.879 "name": "spare", 00:11:58.879 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:11:58.879 "is_configured": true, 00:11:58.879 "data_offset": 2048, 00:11:58.879 "data_size": 63488 00:11:58.879 }, 00:11:58.879 { 00:11:58.879 "name": "BaseBdev2", 00:11:58.879 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:58.879 "is_configured": true, 00:11:58.879 "data_offset": 2048, 00:11:58.879 "data_size": 63488 00:11:58.879 } 00:11:58.879 ] 00:11:58.879 }' 00:11:58.879 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.879 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- 
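`verify_raid_bdev_state` works by pulling the `bdev_raid_get_bdevs` JSON shown above and checking individual fields with `jq -r`. A jq-free sketch of the same field checks, run against a trimmed copy of that JSON (assumed representative, not the live RPC output):

```shell
# Trimmed copy of the raid_bdev_info JSON from the log above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'

# Crude field extraction standing in for jq -r '.state' / '.raid_level'.
state=$(grep -o '"state": "[a-z0-9]*"' <<< "$raid_bdev_info" | cut -d'"' -f4)
level=$(grep -o '"raid_level": "[a-z0-9]*"' <<< "$raid_bdev_info" | cut -d'"' -f4)
echo "state=$state level=$level"
```

The real helper compares each extracted value against the expected arguments (`online`, `raid1`, strip size, operational count) and fails the test on any mismatch.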
common/autotest_common.sh@10 -- # set +x 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.448 "name": "raid_bdev1", 00:11:59.448 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:59.448 "strip_size_kb": 0, 00:11:59.448 "state": "online", 00:11:59.448 "raid_level": "raid1", 00:11:59.448 "superblock": true, 00:11:59.448 "num_base_bdevs": 2, 00:11:59.448 "num_base_bdevs_discovered": 2, 00:11:59.448 "num_base_bdevs_operational": 2, 00:11:59.448 "base_bdevs_list": [ 00:11:59.448 { 00:11:59.448 "name": "spare", 00:11:59.448 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:11:59.448 "is_configured": true, 00:11:59.448 "data_offset": 2048, 00:11:59.448 "data_size": 63488 00:11:59.448 }, 00:11:59.448 { 00:11:59.448 "name": "BaseBdev2", 00:11:59.448 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:59.448 "is_configured": true, 00:11:59.448 "data_offset": 2048, 00:11:59.448 "data_size": 63488 00:11:59.448 } 00:11:59.448 ] 00:11:59.448 }' 00:11:59.448 02:27:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.448 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.448 [2024-11-28 02:27:33.120074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.709 02:27:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.709 "name": "raid_bdev1", 00:11:59.709 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:11:59.709 "strip_size_kb": 0, 00:11:59.709 "state": "online", 00:11:59.709 "raid_level": "raid1", 00:11:59.709 "superblock": true, 00:11:59.709 "num_base_bdevs": 2, 00:11:59.709 "num_base_bdevs_discovered": 1, 00:11:59.709 "num_base_bdevs_operational": 1, 00:11:59.709 "base_bdevs_list": [ 00:11:59.709 { 00:11:59.709 "name": null, 00:11:59.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.709 "is_configured": false, 00:11:59.709 "data_offset": 0, 00:11:59.709 "data_size": 63488 00:11:59.709 }, 00:11:59.709 { 00:11:59.709 "name": "BaseBdev2", 00:11:59.709 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:11:59.709 "is_configured": true, 00:11:59.709 "data_offset": 2048, 00:11:59.709 "data_size": 63488 00:11:59.709 } 00:11:59.709 ] 00:11:59.709 }' 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.709 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.968 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:59.968 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.968 02:27:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.968 [2024-11-28 02:27:33.527478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.968 [2024-11-28 02:27:33.527788] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:59.968 [2024-11-28 02:27:33.527869] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:59.968 [2024-11-28 02:27:33.527964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.968 [2024-11-28 02:27:33.543847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:11:59.968 02:27:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.968 02:27:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:59.968 [2024-11-28 02:27:33.545753] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.950 02:27:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.950 "name": "raid_bdev1", 00:12:00.950 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:12:00.950 "strip_size_kb": 0, 00:12:00.950 "state": "online", 00:12:00.950 "raid_level": "raid1", 00:12:00.950 "superblock": true, 00:12:00.950 "num_base_bdevs": 2, 00:12:00.950 "num_base_bdevs_discovered": 2, 00:12:00.950 "num_base_bdevs_operational": 2, 00:12:00.950 "process": { 00:12:00.950 "type": "rebuild", 00:12:00.950 "target": "spare", 00:12:00.950 "progress": { 00:12:00.950 "blocks": 20480, 00:12:00.950 "percent": 32 00:12:00.950 } 00:12:00.950 }, 00:12:00.950 "base_bdevs_list": [ 00:12:00.950 { 00:12:00.950 "name": "spare", 00:12:00.950 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:12:00.950 "is_configured": true, 00:12:00.950 "data_offset": 2048, 00:12:00.950 "data_size": 63488 00:12:00.950 }, 00:12:00.950 { 00:12:00.950 "name": "BaseBdev2", 00:12:00.950 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:12:00.950 "is_configured": true, 00:12:00.950 "data_offset": 2048, 00:12:00.950 "data_size": 63488 00:12:00.950 } 00:12:00.950 ] 00:12:00.950 }' 00:12:00.950 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.218 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.218 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.218 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.218 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:01.218 02:27:34 bdev_raid.raid_rebuild_test_sb -- 
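In the rebuild `process` object above, `"percent": 32` is simply integer progress: 20480 of 63488 data blocks rebuilt. The arithmetic:

```shell
# Rebuild progress as reported by bdev_raid_get_bdevs:
# blocks rebuilt so far out of data_size blocks.
blocks=20480
data_size=63488
percent=$(( blocks * 100 / data_size ))
echo "$percent"   # integer (truncating) division, matching "percent": 32
```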
common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.218 02:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.218 [2024-11-28 02:27:34.673304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.218 [2024-11-28 02:27:34.751523] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:01.218 [2024-11-28 02:27:34.751617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.218 [2024-11-28 02:27:34.751635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.218 [2024-11-28 02:27:34.751646] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:01.218 02:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.218 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:01.218 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.218 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.219 
02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.219 "name": "raid_bdev1", 00:12:01.219 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:12:01.219 "strip_size_kb": 0, 00:12:01.219 "state": "online", 00:12:01.219 "raid_level": "raid1", 00:12:01.219 "superblock": true, 00:12:01.219 "num_base_bdevs": 2, 00:12:01.219 "num_base_bdevs_discovered": 1, 00:12:01.219 "num_base_bdevs_operational": 1, 00:12:01.219 "base_bdevs_list": [ 00:12:01.219 { 00:12:01.219 "name": null, 00:12:01.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.219 "is_configured": false, 00:12:01.219 "data_offset": 0, 00:12:01.219 "data_size": 63488 00:12:01.219 }, 00:12:01.219 { 00:12:01.219 "name": "BaseBdev2", 00:12:01.219 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:12:01.219 "is_configured": true, 00:12:01.219 "data_offset": 2048, 00:12:01.219 "data_size": 63488 00:12:01.219 } 00:12:01.219 ] 00:12:01.219 }' 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.219 02:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.793 02:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:01.793 02:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.793 02:27:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.793 [2024-11-28 02:27:35.206286] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:01.793 [2024-11-28 02:27:35.206443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.793 [2024-11-28 02:27:35.206490] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:01.793 [2024-11-28 02:27:35.206563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.793 [2024-11-28 02:27:35.207102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.793 [2024-11-28 02:27:35.207184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:01.793 [2024-11-28 02:27:35.207335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:01.793 [2024-11-28 02:27:35.207390] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:01.793 [2024-11-28 02:27:35.207444] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:01.793 [2024-11-28 02:27:35.207499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.793 [2024-11-28 02:27:35.222950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:01.793 spare 00:12:01.793 02:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.793 [2024-11-28 02:27:35.224853] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:01.793 02:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.733 "name": "raid_bdev1", 00:12:02.733 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:12:02.733 "strip_size_kb": 0, 00:12:02.733 "state": "online", 00:12:02.733 
"raid_level": "raid1", 00:12:02.733 "superblock": true, 00:12:02.733 "num_base_bdevs": 2, 00:12:02.733 "num_base_bdevs_discovered": 2, 00:12:02.733 "num_base_bdevs_operational": 2, 00:12:02.733 "process": { 00:12:02.733 "type": "rebuild", 00:12:02.733 "target": "spare", 00:12:02.733 "progress": { 00:12:02.733 "blocks": 20480, 00:12:02.733 "percent": 32 00:12:02.733 } 00:12:02.733 }, 00:12:02.733 "base_bdevs_list": [ 00:12:02.733 { 00:12:02.733 "name": "spare", 00:12:02.733 "uuid": "11db0ed6-bd63-5c20-9b6c-6601034c2db0", 00:12:02.733 "is_configured": true, 00:12:02.733 "data_offset": 2048, 00:12:02.733 "data_size": 63488 00:12:02.733 }, 00:12:02.733 { 00:12:02.733 "name": "BaseBdev2", 00:12:02.733 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:12:02.733 "is_configured": true, 00:12:02.733 "data_offset": 2048, 00:12:02.733 "data_size": 63488 00:12:02.733 } 00:12:02.733 ] 00:12:02.733 }' 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:02.733 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.734 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.734 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:02.734 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.734 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.734 [2024-11-28 02:27:36.360556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:02.993 [2024-11-28 02:27:36.430855] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:02.993 [2024-11-28 02:27:36.430983] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.993 [2024-11-28 02:27:36.431005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:02.994 [2024-11-28 02:27:36.431014] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.994 02:27:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.994 "name": "raid_bdev1", 00:12:02.994 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:12:02.994 "strip_size_kb": 0, 00:12:02.994 "state": "online", 00:12:02.994 "raid_level": "raid1", 00:12:02.994 "superblock": true, 00:12:02.994 "num_base_bdevs": 2, 00:12:02.994 "num_base_bdevs_discovered": 1, 00:12:02.994 "num_base_bdevs_operational": 1, 00:12:02.994 "base_bdevs_list": [ 00:12:02.994 { 00:12:02.994 "name": null, 00:12:02.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.994 "is_configured": false, 00:12:02.994 "data_offset": 0, 00:12:02.994 "data_size": 63488 00:12:02.994 }, 00:12:02.994 { 00:12:02.994 "name": "BaseBdev2", 00:12:02.994 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:12:02.994 "is_configured": true, 00:12:02.994 "data_offset": 2048, 00:12:02.994 "data_size": 63488 00:12:02.994 } 00:12:02.994 ] 00:12:02.994 }' 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.994 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.253 "name": "raid_bdev1", 00:12:03.253 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:12:03.253 "strip_size_kb": 0, 00:12:03.253 "state": "online", 00:12:03.253 "raid_level": "raid1", 00:12:03.253 "superblock": true, 00:12:03.253 "num_base_bdevs": 2, 00:12:03.253 "num_base_bdevs_discovered": 1, 00:12:03.253 "num_base_bdevs_operational": 1, 00:12:03.253 "base_bdevs_list": [ 00:12:03.253 { 00:12:03.253 "name": null, 00:12:03.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.253 "is_configured": false, 00:12:03.253 "data_offset": 0, 00:12:03.253 "data_size": 63488 00:12:03.253 }, 00:12:03.253 { 00:12:03.253 "name": "BaseBdev2", 00:12:03.253 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:12:03.253 "is_configured": true, 00:12:03.253 "data_offset": 2048, 00:12:03.253 "data_size": 63488 00:12:03.253 } 00:12:03.253 ] 00:12:03.253 }' 00:12:03.253 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.514 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:03.514 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.514 02:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:03.514 02:27:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:03.514 02:27:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:03.514 02:27:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.514 02:27:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.514 02:27:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:03.514 02:27:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.514 02:27:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.514 [2024-11-28 02:27:37.021005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:03.514 [2024-11-28 02:27:37.021090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.514 [2024-11-28 02:27:37.021123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:03.514 [2024-11-28 02:27:37.021144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.514 [2024-11-28 02:27:37.021631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.514 [2024-11-28 02:27:37.021661] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:03.514 [2024-11-28 02:27:37.021752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:03.514 [2024-11-28 02:27:37.021770] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:03.514 [2024-11-28 02:27:37.021781] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:03.514 [2024-11-28 02:27:37.021793] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:03.514 BaseBdev1 00:12:03.514 02:27:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.514 02:27:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.454 "name": "raid_bdev1", 00:12:04.454 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:12:04.454 
"strip_size_kb": 0, 00:12:04.454 "state": "online", 00:12:04.454 "raid_level": "raid1", 00:12:04.454 "superblock": true, 00:12:04.454 "num_base_bdevs": 2, 00:12:04.454 "num_base_bdevs_discovered": 1, 00:12:04.454 "num_base_bdevs_operational": 1, 00:12:04.454 "base_bdevs_list": [ 00:12:04.454 { 00:12:04.454 "name": null, 00:12:04.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.454 "is_configured": false, 00:12:04.454 "data_offset": 0, 00:12:04.454 "data_size": 63488 00:12:04.454 }, 00:12:04.454 { 00:12:04.454 "name": "BaseBdev2", 00:12:04.454 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:12:04.454 "is_configured": true, 00:12:04.454 "data_offset": 2048, 00:12:04.454 "data_size": 63488 00:12:04.454 } 00:12:04.454 ] 00:12:04.454 }' 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.454 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.024 02:27:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.024 "name": "raid_bdev1", 00:12:05.024 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:12:05.024 "strip_size_kb": 0, 00:12:05.024 "state": "online", 00:12:05.024 "raid_level": "raid1", 00:12:05.024 "superblock": true, 00:12:05.024 "num_base_bdevs": 2, 00:12:05.024 "num_base_bdevs_discovered": 1, 00:12:05.024 "num_base_bdevs_operational": 1, 00:12:05.024 "base_bdevs_list": [ 00:12:05.024 { 00:12:05.024 "name": null, 00:12:05.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.024 "is_configured": false, 00:12:05.024 "data_offset": 0, 00:12:05.024 "data_size": 63488 00:12:05.024 }, 00:12:05.024 { 00:12:05.024 "name": "BaseBdev2", 00:12:05.024 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:12:05.024 "is_configured": true, 00:12:05.024 "data_offset": 2048, 00:12:05.024 "data_size": 63488 00:12:05.024 } 00:12:05.024 ] 00:12:05.024 }' 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.024 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.024 [2024-11-28 02:27:38.590434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.025 [2024-11-28 02:27:38.590630] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:05.025 [2024-11-28 02:27:38.590657] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:05.025 request: 00:12:05.025 { 00:12:05.025 "base_bdev": "BaseBdev1", 00:12:05.025 "raid_bdev": "raid_bdev1", 00:12:05.025 "method": "bdev_raid_add_base_bdev", 00:12:05.025 "req_id": 1 00:12:05.025 } 00:12:05.025 Got JSON-RPC error response 00:12:05.025 response: 00:12:05.025 { 00:12:05.025 "code": -22, 00:12:05.025 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:05.025 } 00:12:05.025 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:05.025 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:05.025 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:05.025 02:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:05.025 02:27:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:05.025 02:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.963 02:27:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.224 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.224 "name": "raid_bdev1", 00:12:06.224 "uuid": 
"a2398a3c-cf54-49f9-9c42-722d1284becd", 00:12:06.224 "strip_size_kb": 0, 00:12:06.224 "state": "online", 00:12:06.224 "raid_level": "raid1", 00:12:06.224 "superblock": true, 00:12:06.224 "num_base_bdevs": 2, 00:12:06.224 "num_base_bdevs_discovered": 1, 00:12:06.224 "num_base_bdevs_operational": 1, 00:12:06.224 "base_bdevs_list": [ 00:12:06.224 { 00:12:06.224 "name": null, 00:12:06.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.224 "is_configured": false, 00:12:06.224 "data_offset": 0, 00:12:06.224 "data_size": 63488 00:12:06.224 }, 00:12:06.224 { 00:12:06.224 "name": "BaseBdev2", 00:12:06.224 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:12:06.224 "is_configured": true, 00:12:06.224 "data_offset": 2048, 00:12:06.224 "data_size": 63488 00:12:06.224 } 00:12:06.224 ] 00:12:06.224 }' 00:12:06.224 02:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.224 02:27:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.483 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.484 "name": "raid_bdev1", 00:12:06.484 "uuid": "a2398a3c-cf54-49f9-9c42-722d1284becd", 00:12:06.484 "strip_size_kb": 0, 00:12:06.484 "state": "online", 00:12:06.484 "raid_level": "raid1", 00:12:06.484 "superblock": true, 00:12:06.484 "num_base_bdevs": 2, 00:12:06.484 "num_base_bdevs_discovered": 1, 00:12:06.484 "num_base_bdevs_operational": 1, 00:12:06.484 "base_bdevs_list": [ 00:12:06.484 { 00:12:06.484 "name": null, 00:12:06.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.484 "is_configured": false, 00:12:06.484 "data_offset": 0, 00:12:06.484 "data_size": 63488 00:12:06.484 }, 00:12:06.484 { 00:12:06.484 "name": "BaseBdev2", 00:12:06.484 "uuid": "de54809b-7e80-5912-8566-e039bbf1b3b4", 00:12:06.484 "is_configured": true, 00:12:06.484 "data_offset": 2048, 00:12:06.484 "data_size": 63488 00:12:06.484 } 00:12:06.484 ] 00:12:06.484 }' 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.484 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75501 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75501 ']' 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75501 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75501 00:12:06.744 killing process with pid 75501 00:12:06.744 Received shutdown signal, test time was about 60.000000 seconds 00:12:06.744 00:12:06.744 Latency(us) 00:12:06.744 [2024-11-28T02:27:40.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.744 [2024-11-28T02:27:40.423Z] =================================================================================================================== 00:12:06.744 [2024-11-28T02:27:40.423Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75501' 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75501 00:12:06.744 [2024-11-28 02:27:40.234148] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:06.744 [2024-11-28 02:27:40.234283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.744 02:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75501 00:12:06.744 [2024-11-28 02:27:40.234338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.744 [2024-11-28 02:27:40.234351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:07.003 [2024-11-28 02:27:40.530948] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.944 02:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:07.944 00:12:07.944 real 0m22.801s 00:12:07.944 user 0m27.219s 00:12:07.944 sys 0m3.402s 00:12:07.944 02:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.944 02:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.944 ************************************ 00:12:07.944 END TEST raid_rebuild_test_sb 00:12:07.944 ************************************ 00:12:08.204 02:27:41 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:08.204 02:27:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:08.204 02:27:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.204 02:27:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:08.204 ************************************ 00:12:08.204 START TEST raid_rebuild_test_io 00:12:08.204 ************************************ 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76230 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76230 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76230 ']' 00:12:08.204 02:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.205 02:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.205 02:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.205 02:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.205 02:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.205 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:08.205 Zero copy mechanism will not be used. 00:12:08.205 [2024-11-28 02:27:41.776082] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:08.205 [2024-11-28 02:27:41.776204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76230 ] 00:12:08.587 [2024-11-28 02:27:41.947723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.587 [2024-11-28 02:27:42.057856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.587 [2024-11-28 02:27:42.251443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.587 [2024-11-28 02:27:42.251512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.158 BaseBdev1_malloc 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.158 [2024-11-28 02:27:42.619692] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:09.158 [2024-11-28 02:27:42.619757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.158 [2024-11-28 02:27:42.619783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:09.158 [2024-11-28 02:27:42.619796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.158 [2024-11-28 02:27:42.621889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.158 [2024-11-28 02:27:42.621944] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:09.158 BaseBdev1 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.158 BaseBdev2_malloc 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.158 [2024-11-28 02:27:42.675759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:09.158 [2024-11-28 02:27:42.675823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.158 [2024-11-28 02:27:42.675848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:09.158 [2024-11-28 02:27:42.675861] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.158 [2024-11-28 02:27:42.677895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.158 [2024-11-28 02:27:42.677950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:09.158 BaseBdev2 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.158 spare_malloc 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.158 spare_delay 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.158 [2024-11-28 02:27:42.757149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:09.158 [2024-11-28 02:27:42.757211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.158 [2024-11-28 02:27:42.757232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:09.158 [2024-11-28 02:27:42.757245] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.158 [2024-11-28 02:27:42.759277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.158 [2024-11-28 02:27:42.759332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:09.158 spare 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:09.158 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.158 
02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.158 [2024-11-28 02:27:42.769185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.158 [2024-11-28 02:27:42.770912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.158 [2024-11-28 02:27:42.771021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:09.158 [2024-11-28 02:27:42.771042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:09.158 [2024-11-28 02:27:42.771303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:09.158 [2024-11-28 02:27:42.771487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:09.158 [2024-11-28 02:27:42.771504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:09.159 [2024-11-28 02:27:42.771651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.159 "name": "raid_bdev1", 00:12:09.159 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:09.159 "strip_size_kb": 0, 00:12:09.159 "state": "online", 00:12:09.159 "raid_level": "raid1", 00:12:09.159 "superblock": false, 00:12:09.159 "num_base_bdevs": 2, 00:12:09.159 "num_base_bdevs_discovered": 2, 00:12:09.159 "num_base_bdevs_operational": 2, 00:12:09.159 "base_bdevs_list": [ 00:12:09.159 { 00:12:09.159 "name": "BaseBdev1", 00:12:09.159 "uuid": "3d296b59-2173-5c5b-9550-90aba12a5b93", 00:12:09.159 "is_configured": true, 00:12:09.159 "data_offset": 0, 00:12:09.159 "data_size": 65536 00:12:09.159 }, 00:12:09.159 { 00:12:09.159 "name": "BaseBdev2", 00:12:09.159 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:09.159 "is_configured": true, 00:12:09.159 "data_offset": 0, 00:12:09.159 "data_size": 65536 00:12:09.159 } 00:12:09.159 ] 00:12:09.159 }' 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.159 02:27:42 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.728 [2024-11-28 02:27:43.216719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:09.728 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:09.729 [2024-11-28 02:27:43.296271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:09.729 "name": "raid_bdev1", 00:12:09.729 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:09.729 "strip_size_kb": 0, 00:12:09.729 "state": "online", 00:12:09.729 "raid_level": "raid1", 00:12:09.729 "superblock": false, 00:12:09.729 "num_base_bdevs": 2, 00:12:09.729 "num_base_bdevs_discovered": 1, 00:12:09.729 "num_base_bdevs_operational": 1, 00:12:09.729 "base_bdevs_list": [ 00:12:09.729 { 00:12:09.729 "name": null, 00:12:09.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.729 "is_configured": false, 00:12:09.729 "data_offset": 0, 00:12:09.729 "data_size": 65536 00:12:09.729 }, 00:12:09.729 { 00:12:09.729 "name": "BaseBdev2", 00:12:09.729 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:09.729 "is_configured": true, 00:12:09.729 "data_offset": 0, 00:12:09.729 "data_size": 65536 00:12:09.729 } 00:12:09.729 ] 00:12:09.729 }' 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.729 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.729 [2024-11-28 02:27:43.392544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:09.729 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:09.729 Zero copy mechanism will not be used. 00:12:09.729 Running I/O for 60 seconds... 
00:12:10.298 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:10.298 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.298 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.298 [2024-11-28 02:27:43.725581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:10.298 02:27:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.298 02:27:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:10.298 [2024-11-28 02:27:43.788792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:10.298 [2024-11-28 02:27:43.790693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:10.298 [2024-11-28 02:27:43.903996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:10.298 [2024-11-28 02:27:43.904515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:10.558 [2024-11-28 02:27:44.105872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:10.558 [2024-11-28 02:27:44.106175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:10.817 [2024-11-28 02:27:44.331419] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:10.817 209.00 IOPS, 627.00 MiB/s [2024-11-28T02:27:44.496Z] [2024-11-28 02:27:44.458246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:10.817 [2024-11-28 02:27:44.458615] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.386 "name": "raid_bdev1", 00:12:11.386 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:11.386 "strip_size_kb": 0, 00:12:11.386 "state": "online", 00:12:11.386 "raid_level": "raid1", 00:12:11.386 "superblock": false, 00:12:11.386 "num_base_bdevs": 2, 00:12:11.386 "num_base_bdevs_discovered": 2, 00:12:11.386 "num_base_bdevs_operational": 2, 00:12:11.386 "process": { 00:12:11.386 "type": "rebuild", 00:12:11.386 "target": "spare", 00:12:11.386 "progress": { 00:12:11.386 "blocks": 12288, 00:12:11.386 "percent": 18 00:12:11.386 } 00:12:11.386 }, 00:12:11.386 "base_bdevs_list": [ 00:12:11.386 { 00:12:11.386 "name": "spare", 00:12:11.386 "uuid": 
"0f66a07e-3919-5df7-ae57-9ef35cff567a", 00:12:11.386 "is_configured": true, 00:12:11.386 "data_offset": 0, 00:12:11.386 "data_size": 65536 00:12:11.386 }, 00:12:11.386 { 00:12:11.386 "name": "BaseBdev2", 00:12:11.386 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:11.386 "is_configured": true, 00:12:11.386 "data_offset": 0, 00:12:11.386 "data_size": 65536 00:12:11.386 } 00:12:11.386 ] 00:12:11.386 }' 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.386 02:27:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.386 [2024-11-28 02:27:44.905511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.386 [2024-11-28 02:27:45.011784] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:11.386 [2024-11-28 02:27:45.019976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.386 [2024-11-28 02:27:45.020037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.386 [2024-11-28 02:27:45.020051] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:11.386 [2024-11-28 02:27:45.061976] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:11.646 02:27:45 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.646 "name": "raid_bdev1", 00:12:11.646 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:11.646 "strip_size_kb": 0, 00:12:11.646 "state": "online", 
00:12:11.646 "raid_level": "raid1", 00:12:11.646 "superblock": false, 00:12:11.646 "num_base_bdevs": 2, 00:12:11.646 "num_base_bdevs_discovered": 1, 00:12:11.646 "num_base_bdevs_operational": 1, 00:12:11.646 "base_bdevs_list": [ 00:12:11.646 { 00:12:11.646 "name": null, 00:12:11.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.646 "is_configured": false, 00:12:11.646 "data_offset": 0, 00:12:11.646 "data_size": 65536 00:12:11.646 }, 00:12:11.646 { 00:12:11.646 "name": "BaseBdev2", 00:12:11.646 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:11.646 "is_configured": true, 00:12:11.646 "data_offset": 0, 00:12:11.646 "data_size": 65536 00:12:11.646 } 00:12:11.646 ] 00:12:11.646 }' 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.646 02:27:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.905 170.50 IOPS, 511.50 MiB/s [2024-11-28T02:27:45.584Z] 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:11.905 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.905 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:11.905 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:11.905 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.905 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.905 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.905 02:27:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.905 02:27:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.905 02:27:45 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.905 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.905 "name": "raid_bdev1", 00:12:11.905 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:11.905 "strip_size_kb": 0, 00:12:11.905 "state": "online", 00:12:11.905 "raid_level": "raid1", 00:12:11.905 "superblock": false, 00:12:11.905 "num_base_bdevs": 2, 00:12:11.905 "num_base_bdevs_discovered": 1, 00:12:11.905 "num_base_bdevs_operational": 1, 00:12:11.905 "base_bdevs_list": [ 00:12:11.905 { 00:12:11.905 "name": null, 00:12:11.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.905 "is_configured": false, 00:12:11.905 "data_offset": 0, 00:12:11.905 "data_size": 65536 00:12:11.905 }, 00:12:11.905 { 00:12:11.905 "name": "BaseBdev2", 00:12:11.905 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:11.905 "is_configured": true, 00:12:11.905 "data_offset": 0, 00:12:11.905 "data_size": 65536 00:12:11.905 } 00:12:11.905 ] 00:12:11.905 }' 00:12:11.906 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.906 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:11.906 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.906 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:11.906 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:11.906 02:27:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.906 02:27:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.165 [2024-11-28 02:27:45.590415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.165 02:27:45 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.165 02:27:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:12.165 [2024-11-28 02:27:45.665541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:12.165 [2024-11-28 02:27:45.667423] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.165 [2024-11-28 02:27:45.780977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:12.165 [2024-11-28 02:27:45.781557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:12.426 [2024-11-28 02:27:45.994612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:12.426 [2024-11-28 02:27:45.994967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:12.996 159.00 IOPS, 477.00 MiB/s [2024-11-28T02:27:46.675Z] [2024-11-28 02:27:46.431256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:12.996 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.996 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.996 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.996 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.996 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.996 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.996 02:27:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.996 02:27:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.996 02:27:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.996 02:27:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.256 "name": "raid_bdev1", 00:12:13.256 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:13.256 "strip_size_kb": 0, 00:12:13.256 "state": "online", 00:12:13.256 "raid_level": "raid1", 00:12:13.256 "superblock": false, 00:12:13.256 "num_base_bdevs": 2, 00:12:13.256 "num_base_bdevs_discovered": 2, 00:12:13.256 "num_base_bdevs_operational": 2, 00:12:13.256 "process": { 00:12:13.256 "type": "rebuild", 00:12:13.256 "target": "spare", 00:12:13.256 "progress": { 00:12:13.256 "blocks": 12288, 00:12:13.256 "percent": 18 00:12:13.256 } 00:12:13.256 }, 00:12:13.256 "base_bdevs_list": [ 00:12:13.256 { 00:12:13.256 "name": "spare", 00:12:13.256 "uuid": "0f66a07e-3919-5df7-ae57-9ef35cff567a", 00:12:13.256 "is_configured": true, 00:12:13.256 "data_offset": 0, 00:12:13.256 "data_size": 65536 00:12:13.256 }, 00:12:13.256 { 00:12:13.256 "name": "BaseBdev2", 00:12:13.256 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:13.256 "is_configured": true, 00:12:13.256 "data_offset": 0, 00:12:13.256 "data_size": 65536 00:12:13.256 } 00:12:13.256 ] 00:12:13.256 }' 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.256 [2024-11-28 02:27:46.762552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=396 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.256 "name": "raid_bdev1", 00:12:13.256 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:13.256 "strip_size_kb": 0, 00:12:13.256 "state": "online", 00:12:13.256 "raid_level": "raid1", 00:12:13.256 "superblock": false, 00:12:13.256 "num_base_bdevs": 2, 00:12:13.256 "num_base_bdevs_discovered": 2, 00:12:13.256 "num_base_bdevs_operational": 2, 00:12:13.256 "process": { 00:12:13.256 "type": "rebuild", 00:12:13.256 "target": "spare", 00:12:13.256 "progress": { 00:12:13.256 "blocks": 14336, 00:12:13.256 "percent": 21 00:12:13.256 } 00:12:13.256 }, 00:12:13.256 "base_bdevs_list": [ 00:12:13.256 { 00:12:13.256 "name": "spare", 00:12:13.256 "uuid": "0f66a07e-3919-5df7-ae57-9ef35cff567a", 00:12:13.256 "is_configured": true, 00:12:13.256 "data_offset": 0, 00:12:13.256 "data_size": 65536 00:12:13.256 }, 00:12:13.256 { 00:12:13.256 "name": "BaseBdev2", 00:12:13.256 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:13.256 "is_configured": true, 00:12:13.256 "data_offset": 0, 00:12:13.256 "data_size": 65536 00:12:13.256 } 00:12:13.256 ] 00:12:13.256 }' 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.256 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.516 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.516 02:27:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:13.516 [2024-11-28 02:27:46.971303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:13.516 [2024-11-28 02:27:46.971656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:12:13.776 [2024-11-28 02:27:47.302083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:13.776 143.25 IOPS, 429.75 MiB/s [2024-11-28T02:27:47.455Z] [2024-11-28 02:27:47.423660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:14.344 [2024-11-28 02:27:47.754006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:14.344 [2024-11-28 02:27:47.754555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.344 [2024-11-28 02:27:47.976698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 
00:12:14.344 02:27:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.344 02:27:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.344 "name": "raid_bdev1", 00:12:14.344 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:14.344 "strip_size_kb": 0, 00:12:14.344 "state": "online", 00:12:14.344 "raid_level": "raid1", 00:12:14.344 "superblock": false, 00:12:14.344 "num_base_bdevs": 2, 00:12:14.344 "num_base_bdevs_discovered": 2, 00:12:14.344 "num_base_bdevs_operational": 2, 00:12:14.344 "process": { 00:12:14.344 "type": "rebuild", 00:12:14.344 "target": "spare", 00:12:14.344 "progress": { 00:12:14.344 "blocks": 26624, 00:12:14.344 "percent": 40 00:12:14.344 } 00:12:14.344 }, 00:12:14.344 "base_bdevs_list": [ 00:12:14.344 { 00:12:14.344 "name": "spare", 00:12:14.344 "uuid": "0f66a07e-3919-5df7-ae57-9ef35cff567a", 00:12:14.344 "is_configured": true, 00:12:14.344 "data_offset": 0, 00:12:14.344 "data_size": 65536 00:12:14.344 }, 00:12:14.344 { 00:12:14.344 "name": "BaseBdev2", 00:12:14.344 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:14.344 "is_configured": true, 00:12:14.344 "data_offset": 0, 00:12:14.344 "data_size": 65536 00:12:14.344 } 00:12:14.344 ] 00:12:14.344 }' 00:12:14.344 02:27:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.603 02:27:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.604 02:27:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.604 02:27:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.604 02:27:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:15.122 123.00 IOPS, 369.00 MiB/s [2024-11-28T02:27:48.801Z] [2024-11-28 02:27:48.671508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
40960 offset_begin: 36864 offset_end: 43008 00:12:15.382 [2024-11-28 02:27:48.984086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:15.641 [2024-11-28 02:27:49.104885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:15.641 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.642 "name": "raid_bdev1", 00:12:15.642 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:15.642 "strip_size_kb": 0, 00:12:15.642 "state": "online", 00:12:15.642 "raid_level": "raid1", 00:12:15.642 "superblock": false, 00:12:15.642 "num_base_bdevs": 2, 00:12:15.642 "num_base_bdevs_discovered": 
2, 00:12:15.642 "num_base_bdevs_operational": 2, 00:12:15.642 "process": { 00:12:15.642 "type": "rebuild", 00:12:15.642 "target": "spare", 00:12:15.642 "progress": { 00:12:15.642 "blocks": 47104, 00:12:15.642 "percent": 71 00:12:15.642 } 00:12:15.642 }, 00:12:15.642 "base_bdevs_list": [ 00:12:15.642 { 00:12:15.642 "name": "spare", 00:12:15.642 "uuid": "0f66a07e-3919-5df7-ae57-9ef35cff567a", 00:12:15.642 "is_configured": true, 00:12:15.642 "data_offset": 0, 00:12:15.642 "data_size": 65536 00:12:15.642 }, 00:12:15.642 { 00:12:15.642 "name": "BaseBdev2", 00:12:15.642 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:15.642 "is_configured": true, 00:12:15.642 "data_offset": 0, 00:12:15.642 "data_size": 65536 00:12:15.642 } 00:12:15.642 ] 00:12:15.642 }' 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.642 02:27:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:16.161 109.83 IOPS, 329.50 MiB/s [2024-11-28T02:27:49.840Z] [2024-11-28 02:27:49.755160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:16.729 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.729 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.729 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.729 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:16.729 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.729 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.729 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.730 02:27:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.730 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.730 02:27:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.730 02:27:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.730 [2024-11-28 02:27:50.299414] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:16.730 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.730 "name": "raid_bdev1", 00:12:16.730 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:16.730 "strip_size_kb": 0, 00:12:16.730 "state": "online", 00:12:16.730 "raid_level": "raid1", 00:12:16.730 "superblock": false, 00:12:16.730 "num_base_bdevs": 2, 00:12:16.730 "num_base_bdevs_discovered": 2, 00:12:16.730 "num_base_bdevs_operational": 2, 00:12:16.730 "process": { 00:12:16.730 "type": "rebuild", 00:12:16.730 "target": "spare", 00:12:16.730 "progress": { 00:12:16.730 "blocks": 63488, 00:12:16.730 "percent": 96 00:12:16.730 } 00:12:16.730 }, 00:12:16.730 "base_bdevs_list": [ 00:12:16.730 { 00:12:16.730 "name": "spare", 00:12:16.730 "uuid": "0f66a07e-3919-5df7-ae57-9ef35cff567a", 00:12:16.730 "is_configured": true, 00:12:16.730 "data_offset": 0, 00:12:16.730 "data_size": 65536 00:12:16.730 }, 00:12:16.730 { 00:12:16.730 "name": "BaseBdev2", 00:12:16.730 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:16.730 "is_configured": true, 00:12:16.730 "data_offset": 0, 00:12:16.730 
"data_size": 65536 00:12:16.730 } 00:12:16.730 ] 00:12:16.730 }' 00:12:16.730 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.730 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.730 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.730 100.43 IOPS, 301.29 MiB/s [2024-11-28T02:27:50.409Z] [2024-11-28 02:27:50.399211] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:16.730 [2024-11-28 02:27:50.401374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.989 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.989 02:27:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:17.929 92.88 IOPS, 278.62 MiB/s [2024-11-28T02:27:51.608Z] 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:17.929 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.929 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.929 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.929 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.929 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.929 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.930 "name": "raid_bdev1", 00:12:17.930 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:17.930 "strip_size_kb": 0, 00:12:17.930 "state": "online", 00:12:17.930 "raid_level": "raid1", 00:12:17.930 "superblock": false, 00:12:17.930 "num_base_bdevs": 2, 00:12:17.930 "num_base_bdevs_discovered": 2, 00:12:17.930 "num_base_bdevs_operational": 2, 00:12:17.930 "base_bdevs_list": [ 00:12:17.930 { 00:12:17.930 "name": "spare", 00:12:17.930 "uuid": "0f66a07e-3919-5df7-ae57-9ef35cff567a", 00:12:17.930 "is_configured": true, 00:12:17.930 "data_offset": 0, 00:12:17.930 "data_size": 65536 00:12:17.930 }, 00:12:17.930 { 00:12:17.930 "name": "BaseBdev2", 00:12:17.930 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:17.930 "is_configured": true, 00:12:17.930 "data_offset": 0, 00:12:17.930 "data_size": 65536 00:12:17.930 } 00:12:17.930 ] 00:12:17.930 }' 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.930 02:27:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.930 "name": "raid_bdev1", 00:12:17.930 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:17.930 "strip_size_kb": 0, 00:12:17.930 "state": "online", 00:12:17.930 "raid_level": "raid1", 00:12:17.930 "superblock": false, 00:12:17.930 "num_base_bdevs": 2, 00:12:17.930 "num_base_bdevs_discovered": 2, 00:12:17.930 "num_base_bdevs_operational": 2, 00:12:17.930 "base_bdevs_list": [ 00:12:17.930 { 00:12:17.930 "name": "spare", 00:12:17.930 "uuid": "0f66a07e-3919-5df7-ae57-9ef35cff567a", 00:12:17.930 "is_configured": true, 00:12:17.930 "data_offset": 0, 00:12:17.930 "data_size": 65536 00:12:17.930 }, 00:12:17.930 { 00:12:17.930 "name": "BaseBdev2", 00:12:17.930 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:17.930 "is_configured": true, 00:12:17.930 "data_offset": 0, 00:12:17.930 "data_size": 65536 00:12:17.930 } 00:12:17.930 ] 00:12:17.930 }' 00:12:17.930 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # [[ none == \n\o\n\e ]] 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:18.191 "name": "raid_bdev1", 00:12:18.191 "uuid": "37475153-68f9-4801-9723-ec45e9442034", 00:12:18.191 "strip_size_kb": 0, 00:12:18.191 "state": "online", 00:12:18.191 "raid_level": "raid1", 00:12:18.191 "superblock": false, 00:12:18.191 "num_base_bdevs": 2, 00:12:18.191 "num_base_bdevs_discovered": 2, 00:12:18.191 "num_base_bdevs_operational": 2, 00:12:18.191 "base_bdevs_list": [ 00:12:18.191 { 00:12:18.191 "name": "spare", 00:12:18.191 "uuid": "0f66a07e-3919-5df7-ae57-9ef35cff567a", 00:12:18.191 "is_configured": true, 00:12:18.191 "data_offset": 0, 00:12:18.191 "data_size": 65536 00:12:18.191 }, 00:12:18.191 { 00:12:18.191 "name": "BaseBdev2", 00:12:18.191 "uuid": "d0efe4a5-ba0b-5afc-a831-d07199158d8f", 00:12:18.191 "is_configured": true, 00:12:18.191 "data_offset": 0, 00:12:18.191 "data_size": 65536 00:12:18.191 } 00:12:18.191 ] 00:12:18.191 }' 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.191 02:27:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.760 [2024-11-28 02:27:52.153526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.760 [2024-11-28 02:27:52.153563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.760 00:12:18.760 Latency(us) 00:12:18.760 [2024-11-28T02:27:52.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.760 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:18.760 raid_bdev1 : 8.84 86.95 260.86 0.00 0.00 16341.20 296.92 112641.79 
00:12:18.760 [2024-11-28T02:27:52.439Z] =================================================================================================================== 00:12:18.760 [2024-11-28T02:27:52.439Z] Total : 86.95 260.86 0.00 0.00 16341.20 296.92 112641.79 00:12:18.760 [2024-11-28 02:27:52.241953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.760 [2024-11-28 02:27:52.242019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.760 [2024-11-28 02:27:52.242096] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.760 [2024-11-28 02:27:52.242110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:18.760 { 00:12:18.760 "results": [ 00:12:18.760 { 00:12:18.760 "job": "raid_bdev1", 00:12:18.760 "core_mask": "0x1", 00:12:18.760 "workload": "randrw", 00:12:18.760 "percentage": 50, 00:12:18.760 "status": "finished", 00:12:18.760 "queue_depth": 2, 00:12:18.760 "io_size": 3145728, 00:12:18.760 "runtime": 8.843896, 00:12:18.760 "iops": 86.95262811774359, 00:12:18.760 "mibps": 260.85788435323076, 00:12:18.760 "io_failed": 0, 00:12:18.760 "io_timeout": 0, 00:12:18.760 "avg_latency_us": 16341.198089732598, 00:12:18.760 "min_latency_us": 296.91528384279474, 00:12:18.760 "max_latency_us": 112641.78864628822 00:12:18.760 } 00:12:18.760 ], 00:12:18.760 "core_count": 1 00:12:18.760 } 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:18.760 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:18.761 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.761 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:18.761 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.761 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:18.761 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.761 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:18.761 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.761 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.761 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:19.020 /dev/nbd0 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:19.020 02:27:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.020 1+0 records in 00:12:19.020 1+0 records out 00:12:19.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310667 s, 13.2 MB/s 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.020 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.021 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:19.280 /dev/nbd1 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.280 1+0 records in 00:12:19.280 1+0 records out 00:12:19.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034154 s, 12.0 MB/s 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.280 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:19.541 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:19.541 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.541 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:19.541 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.541 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:19.541 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # 
for i in "${nbd_list[@]}" 00:12:19.541 02:27:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.541 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd0 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76230 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76230 ']' 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76230 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76230 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.800 killing process with pid 76230 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76230' 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76230 00:12:19.800 Received shutdown signal, test time was about 10.082422 seconds 00:12:19.800 00:12:19.800 Latency(us) 
00:12:19.800 [2024-11-28T02:27:53.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.800 [2024-11-28T02:27:53.479Z] =================================================================================================================== 00:12:19.800 [2024-11-28T02:27:53.479Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:19.800 [2024-11-28 02:27:53.457485] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.800 02:27:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76230 00:12:20.061 [2024-11-28 02:27:53.686735] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.456 02:27:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:21.456 00:12:21.456 real 0m13.163s 00:12:21.456 user 0m16.352s 00:12:21.456 sys 0m1.476s 00:12:21.456 02:27:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.456 02:27:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.456 ************************************ 00:12:21.456 END TEST raid_rebuild_test_io 00:12:21.456 ************************************ 00:12:21.456 02:27:54 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:21.456 02:27:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:21.456 02:27:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.456 02:27:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.456 ************************************ 00:12:21.456 START TEST raid_rebuild_test_sb_io 00:12:21.456 ************************************ 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:21.457 02:27:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 
00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76620 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76620 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76620 ']' 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.457 02:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.457 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:21.457 Zero copy mechanism will not be used. 00:12:21.457 [2024-11-28 02:27:55.009085] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:21.457 [2024-11-28 02:27:55.009208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76620 ] 00:12:21.715 [2024-11-28 02:27:55.179311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.715 [2024-11-28 02:27:55.288743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.974 [2024-11-28 02:27:55.487541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.974 [2024-11-28 02:27:55.487593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.234 BaseBdev1_malloc 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.234 [2024-11-28 02:27:55.887764] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:22.234 [2024-11-28 02:27:55.887832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.234 [2024-11-28 02:27:55.887859] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:22.234 [2024-11-28 02:27:55.887872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.234 [2024-11-28 02:27:55.889892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.234 [2024-11-28 02:27:55.889949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:22.234 BaseBdev1 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.234 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.494 BaseBdev2_malloc 00:12:22.494 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.494 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:22.494 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.494 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.494 [2024-11-28 02:27:55.942043] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:22.494 [2024-11-28 02:27:55.942107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:22.494 [2024-11-28 02:27:55.942132] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:22.494 [2024-11-28 02:27:55.942146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.494 [2024-11-28 02:27:55.944287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.494 [2024-11-28 02:27:55.944327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:22.494 BaseBdev2 00:12:22.494 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.494 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:22.494 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.494 02:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.494 spare_malloc 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.494 spare_delay 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.494 
[2024-11-28 02:27:56.026846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:22.494 [2024-11-28 02:27:56.026909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.494 [2024-11-28 02:27:56.026943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:22.494 [2024-11-28 02:27:56.026956] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.494 [2024-11-28 02:27:56.029006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.494 [2024-11-28 02:27:56.029045] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:22.494 spare 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.494 [2024-11-28 02:27:56.038885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.494 [2024-11-28 02:27:56.040611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.494 [2024-11-28 02:27:56.040793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:22.494 [2024-11-28 02:27:56.040819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.494 [2024-11-28 02:27:56.041076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:22.494 [2024-11-28 02:27:56.041252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:22.494 [2024-11-28 
02:27:56.041270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:22.494 [2024-11-28 02:27:56.041426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.494 "name": "raid_bdev1", 00:12:22.494 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:22.494 "strip_size_kb": 0, 00:12:22.494 "state": "online", 00:12:22.494 "raid_level": "raid1", 00:12:22.494 "superblock": true, 00:12:22.494 "num_base_bdevs": 2, 00:12:22.494 "num_base_bdevs_discovered": 2, 00:12:22.494 "num_base_bdevs_operational": 2, 00:12:22.494 "base_bdevs_list": [ 00:12:22.494 { 00:12:22.494 "name": "BaseBdev1", 00:12:22.494 "uuid": "67f17c08-cc21-5a49-afef-e5b27dc56f93", 00:12:22.494 "is_configured": true, 00:12:22.494 "data_offset": 2048, 00:12:22.494 "data_size": 63488 00:12:22.494 }, 00:12:22.494 { 00:12:22.494 "name": "BaseBdev2", 00:12:22.494 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:22.494 "is_configured": true, 00:12:22.494 "data_offset": 2048, 00:12:22.494 "data_size": 63488 00:12:22.494 } 00:12:22.494 ] 00:12:22.494 }' 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.494 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.065 [2024-11-28 02:27:56.466438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.065 [2024-11-28 02:27:56.566001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.065 "name": "raid_bdev1", 00:12:23.065 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:23.065 "strip_size_kb": 0, 00:12:23.065 "state": "online", 00:12:23.065 "raid_level": "raid1", 00:12:23.065 "superblock": true, 00:12:23.065 "num_base_bdevs": 2, 00:12:23.065 "num_base_bdevs_discovered": 1, 00:12:23.065 "num_base_bdevs_operational": 1, 00:12:23.065 "base_bdevs_list": [ 00:12:23.065 { 00:12:23.065 "name": null, 00:12:23.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.065 "is_configured": false, 00:12:23.065 "data_offset": 0, 00:12:23.065 "data_size": 63488 00:12:23.065 }, 00:12:23.065 { 00:12:23.065 "name": "BaseBdev2", 00:12:23.065 "uuid": 
"31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:23.065 "is_configured": true, 00:12:23.065 "data_offset": 2048, 00:12:23.065 "data_size": 63488 00:12:23.065 } 00:12:23.065 ] 00:12:23.065 }' 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.065 02:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.065 [2024-11-28 02:27:56.649432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:23.065 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:23.065 Zero copy mechanism will not be used. 00:12:23.065 Running I/O for 60 seconds... 00:12:23.634 02:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:23.634 02:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.634 02:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.634 [2024-11-28 02:27:57.011647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:23.634 02:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.634 02:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:23.634 [2024-11-28 02:27:57.067448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:23.634 [2024-11-28 02:27:57.069373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:23.634 [2024-11-28 02:27:57.183126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:23.634 [2024-11-28 02:27:57.183680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:23.894 [2024-11-28 02:27:57.398020] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:23.894 [2024-11-28 02:27:57.398365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.154 178.00 IOPS, 534.00 MiB/s [2024-11-28T02:27:57.833Z] [2024-11-28 02:27:57.751785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:24.415 [2024-11-28 02:27:57.973978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:24.415 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.415 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.415 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.415 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.415 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.415 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.415 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.415 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.415 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.415 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.675 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.675 "name": "raid_bdev1", 00:12:24.675 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:24.675 
"strip_size_kb": 0, 00:12:24.675 "state": "online", 00:12:24.675 "raid_level": "raid1", 00:12:24.675 "superblock": true, 00:12:24.675 "num_base_bdevs": 2, 00:12:24.675 "num_base_bdevs_discovered": 2, 00:12:24.675 "num_base_bdevs_operational": 2, 00:12:24.675 "process": { 00:12:24.675 "type": "rebuild", 00:12:24.675 "target": "spare", 00:12:24.675 "progress": { 00:12:24.675 "blocks": 10240, 00:12:24.675 "percent": 16 00:12:24.675 } 00:12:24.675 }, 00:12:24.675 "base_bdevs_list": [ 00:12:24.675 { 00:12:24.675 "name": "spare", 00:12:24.675 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:24.675 "is_configured": true, 00:12:24.675 "data_offset": 2048, 00:12:24.675 "data_size": 63488 00:12:24.675 }, 00:12:24.675 { 00:12:24.675 "name": "BaseBdev2", 00:12:24.675 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:24.675 "is_configured": true, 00:12:24.675 "data_offset": 2048, 00:12:24.675 "data_size": 63488 00:12:24.675 } 00:12:24.675 ] 00:12:24.675 }' 00:12:24.675 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.675 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.675 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.675 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.675 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:24.675 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.675 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.675 [2024-11-28 02:27:58.214746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:24.675 [2024-11-28 02:27:58.214863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 
offset_begin: 12288 offset_end: 18432 00:12:24.675 [2024-11-28 02:27:58.314774] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:24.675 [2024-11-28 02:27:58.317107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.675 [2024-11-28 02:27:58.317158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:24.675 [2024-11-28 02:27:58.317175] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:24.935 [2024-11-28 02:27:58.358959] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.935 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.935 "name": "raid_bdev1", 00:12:24.935 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:24.935 "strip_size_kb": 0, 00:12:24.935 "state": "online", 00:12:24.935 "raid_level": "raid1", 00:12:24.935 "superblock": true, 00:12:24.935 "num_base_bdevs": 2, 00:12:24.935 "num_base_bdevs_discovered": 1, 00:12:24.935 "num_base_bdevs_operational": 1, 00:12:24.935 "base_bdevs_list": [ 00:12:24.935 { 00:12:24.935 "name": null, 00:12:24.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.935 "is_configured": false, 00:12:24.935 "data_offset": 0, 00:12:24.935 "data_size": 63488 00:12:24.935 }, 00:12:24.935 { 00:12:24.935 "name": "BaseBdev2", 00:12:24.935 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:24.935 "is_configured": true, 00:12:24.935 "data_offset": 2048, 00:12:24.935 "data_size": 63488 00:12:24.935 } 00:12:24.935 ] 00:12:24.935 }' 00:12:24.936 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.936 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.195 161.50 IOPS, 484.50 MiB/s [2024-11-28T02:27:58.875Z] 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.196 
02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.196 "name": "raid_bdev1", 00:12:25.196 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:25.196 "strip_size_kb": 0, 00:12:25.196 "state": "online", 00:12:25.196 "raid_level": "raid1", 00:12:25.196 "superblock": true, 00:12:25.196 "num_base_bdevs": 2, 00:12:25.196 "num_base_bdevs_discovered": 1, 00:12:25.196 "num_base_bdevs_operational": 1, 00:12:25.196 "base_bdevs_list": [ 00:12:25.196 { 00:12:25.196 "name": null, 00:12:25.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.196 "is_configured": false, 00:12:25.196 "data_offset": 0, 00:12:25.196 "data_size": 63488 00:12:25.196 }, 00:12:25.196 { 00:12:25.196 "name": "BaseBdev2", 00:12:25.196 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:25.196 "is_configured": true, 00:12:25.196 "data_offset": 2048, 00:12:25.196 "data_size": 63488 00:12:25.196 } 00:12:25.196 ] 00:12:25.196 }' 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.196 02:27:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:25.196 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.455 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:25.455 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:25.455 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.455 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.455 [2024-11-28 02:27:58.921496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:25.455 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.455 02:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:25.455 [2024-11-28 02:27:58.987753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:25.455 [2024-11-28 02:27:58.989664] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.456 [2024-11-28 02:27:59.103140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:25.456 [2024-11-28 02:27:59.103688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:25.715 [2024-11-28 02:27:59.305015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:25.715 [2024-11-28 02:27:59.305307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:25.975 [2024-11-28 02:27:59.636118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
8192 offset_begin: 6144 offset_end: 12288 00:12:26.235 165.00 IOPS, 495.00 MiB/s [2024-11-28T02:27:59.914Z] [2024-11-28 02:27:59.862158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:26.235 [2024-11-28 02:27:59.862545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:26.496 02:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.496 02:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.496 02:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.496 02:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.496 02:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.496 02:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.496 02:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.496 02:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.496 02:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.496 02:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.496 "name": "raid_bdev1", 00:12:26.496 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:26.496 "strip_size_kb": 0, 00:12:26.496 "state": "online", 00:12:26.496 "raid_level": "raid1", 00:12:26.496 "superblock": true, 00:12:26.496 "num_base_bdevs": 2, 00:12:26.496 "num_base_bdevs_discovered": 2, 00:12:26.496 
"num_base_bdevs_operational": 2, 00:12:26.496 "process": { 00:12:26.496 "type": "rebuild", 00:12:26.496 "target": "spare", 00:12:26.496 "progress": { 00:12:26.496 "blocks": 10240, 00:12:26.496 "percent": 16 00:12:26.496 } 00:12:26.496 }, 00:12:26.496 "base_bdevs_list": [ 00:12:26.496 { 00:12:26.496 "name": "spare", 00:12:26.496 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:26.496 "is_configured": true, 00:12:26.496 "data_offset": 2048, 00:12:26.496 "data_size": 63488 00:12:26.496 }, 00:12:26.496 { 00:12:26.496 "name": "BaseBdev2", 00:12:26.496 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:26.496 "is_configured": true, 00:12:26.496 "data_offset": 2048, 00:12:26.496 "data_size": 63488 00:12:26.496 } 00:12:26.496 ] 00:12:26.496 }' 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:26.496 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=410 00:12:26.496 02:28:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.496 "name": "raid_bdev1", 00:12:26.496 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:26.496 "strip_size_kb": 0, 00:12:26.496 "state": "online", 00:12:26.496 "raid_level": "raid1", 00:12:26.496 "superblock": true, 00:12:26.496 "num_base_bdevs": 2, 00:12:26.496 "num_base_bdevs_discovered": 2, 00:12:26.496 "num_base_bdevs_operational": 2, 00:12:26.496 "process": { 00:12:26.496 "type": "rebuild", 00:12:26.496 "target": "spare", 00:12:26.496 "progress": { 00:12:26.496 "blocks": 12288, 00:12:26.496 "percent": 19 00:12:26.496 } 00:12:26.496 }, 00:12:26.496 "base_bdevs_list": [ 00:12:26.496 { 00:12:26.496 "name": "spare", 00:12:26.496 "uuid": 
"a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:26.496 "is_configured": true, 00:12:26.496 "data_offset": 2048, 00:12:26.496 "data_size": 63488 00:12:26.496 }, 00:12:26.496 { 00:12:26.496 "name": "BaseBdev2", 00:12:26.496 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:26.496 "is_configured": true, 00:12:26.496 "data_offset": 2048, 00:12:26.496 "data_size": 63488 00:12:26.496 } 00:12:26.496 ] 00:12:26.496 }' 00:12:26.496 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.757 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.757 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.757 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.757 02:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.757 [2024-11-28 02:28:00.289610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:27.276 143.75 IOPS, 431.25 MiB/s [2024-11-28T02:28:00.955Z] [2024-11-28 02:28:00.755235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:27.537 [2024-11-28 02:28:00.970654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:27.537 [2024-11-28 02:28:00.971070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.797 "name": "raid_bdev1", 00:12:27.797 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:27.797 "strip_size_kb": 0, 00:12:27.797 "state": "online", 00:12:27.797 "raid_level": "raid1", 00:12:27.797 "superblock": true, 00:12:27.797 "num_base_bdevs": 2, 00:12:27.797 "num_base_bdevs_discovered": 2, 00:12:27.797 "num_base_bdevs_operational": 2, 00:12:27.797 "process": { 00:12:27.797 "type": "rebuild", 00:12:27.797 "target": "spare", 00:12:27.797 "progress": { 00:12:27.797 "blocks": 30720, 00:12:27.797 "percent": 48 00:12:27.797 } 00:12:27.797 }, 00:12:27.797 "base_bdevs_list": [ 00:12:27.797 { 00:12:27.797 "name": "spare", 00:12:27.797 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:27.797 "is_configured": true, 00:12:27.797 "data_offset": 2048, 00:12:27.797 "data_size": 63488 00:12:27.797 }, 00:12:27.797 { 00:12:27.797 "name": "BaseBdev2", 00:12:27.797 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:27.797 
"is_configured": true, 00:12:27.797 "data_offset": 2048, 00:12:27.797 "data_size": 63488 00:12:27.797 } 00:12:27.797 ] 00:12:27.797 }' 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.797 02:28:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.056 127.00 IOPS, 381.00 MiB/s [2024-11-28T02:28:01.735Z] [2024-11-28 02:28:01.657668] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.027 02:28:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.027 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.027 "name": "raid_bdev1", 00:12:29.027 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:29.027 "strip_size_kb": 0, 00:12:29.027 "state": "online", 00:12:29.027 "raid_level": "raid1", 00:12:29.027 "superblock": true, 00:12:29.027 "num_base_bdevs": 2, 00:12:29.027 "num_base_bdevs_discovered": 2, 00:12:29.027 "num_base_bdevs_operational": 2, 00:12:29.027 "process": { 00:12:29.027 "type": "rebuild", 00:12:29.027 "target": "spare", 00:12:29.027 "progress": { 00:12:29.027 "blocks": 49152, 00:12:29.027 "percent": 77 00:12:29.027 } 00:12:29.027 }, 00:12:29.027 "base_bdevs_list": [ 00:12:29.027 { 00:12:29.027 "name": "spare", 00:12:29.027 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:29.027 "is_configured": true, 00:12:29.028 "data_offset": 2048, 00:12:29.028 "data_size": 63488 00:12:29.028 }, 00:12:29.028 { 00:12:29.028 "name": "BaseBdev2", 00:12:29.028 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:29.028 "is_configured": true, 00:12:29.028 "data_offset": 2048, 00:12:29.028 "data_size": 63488 00:12:29.028 } 00:12:29.028 ] 00:12:29.028 }' 00:12:29.028 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.028 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.028 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.028 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.028 02:28:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.605 113.33 IOPS, 340.00 MiB/s [2024-11-28T02:28:03.284Z] 
[2024-11-28 02:28:03.094876] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:29.605 [2024-11-28 02:28:03.199712] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:29.605 [2024-11-28 02:28:03.201443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.864 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.864 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.864 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.864 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.864 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.864 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.864 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.865 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.865 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.865 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.865 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.124 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.124 "name": "raid_bdev1", 00:12:30.124 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:30.124 "strip_size_kb": 0, 00:12:30.124 "state": "online", 00:12:30.124 "raid_level": "raid1", 00:12:30.124 "superblock": true, 00:12:30.125 "num_base_bdevs": 2, 
00:12:30.125 "num_base_bdevs_discovered": 2, 00:12:30.125 "num_base_bdevs_operational": 2, 00:12:30.125 "base_bdevs_list": [ 00:12:30.125 { 00:12:30.125 "name": "spare", 00:12:30.125 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:30.125 "is_configured": true, 00:12:30.125 "data_offset": 2048, 00:12:30.125 "data_size": 63488 00:12:30.125 }, 00:12:30.125 { 00:12:30.125 "name": "BaseBdev2", 00:12:30.125 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:30.125 "is_configured": true, 00:12:30.125 "data_offset": 2048, 00:12:30.125 "data_size": 63488 00:12:30.125 } 00:12:30.125 ] 00:12:30.125 }' 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.125 02:28:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.125 101.57 IOPS, 304.71 MiB/s [2024-11-28T02:28:03.804Z] 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.125 "name": "raid_bdev1", 00:12:30.125 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:30.125 "strip_size_kb": 0, 00:12:30.125 "state": "online", 00:12:30.125 "raid_level": "raid1", 00:12:30.125 "superblock": true, 00:12:30.125 "num_base_bdevs": 2, 00:12:30.125 "num_base_bdevs_discovered": 2, 00:12:30.125 "num_base_bdevs_operational": 2, 00:12:30.125 "base_bdevs_list": [ 00:12:30.125 { 00:12:30.125 "name": "spare", 00:12:30.125 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:30.125 "is_configured": true, 00:12:30.125 "data_offset": 2048, 00:12:30.125 "data_size": 63488 00:12:30.125 }, 00:12:30.125 { 00:12:30.125 "name": "BaseBdev2", 00:12:30.125 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:30.125 "is_configured": true, 00:12:30.125 "data_offset": 2048, 00:12:30.125 "data_size": 63488 00:12:30.125 } 00:12:30.125 ] 00:12:30.125 }' 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.125 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.385 "name": "raid_bdev1", 00:12:30.385 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:30.385 "strip_size_kb": 0, 00:12:30.385 "state": "online", 00:12:30.385 "raid_level": "raid1", 00:12:30.385 "superblock": true, 00:12:30.385 "num_base_bdevs": 2, 00:12:30.385 "num_base_bdevs_discovered": 2, 00:12:30.385 "num_base_bdevs_operational": 2, 00:12:30.385 "base_bdevs_list": [ 
00:12:30.385 { 00:12:30.385 "name": "spare", 00:12:30.385 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:30.385 "is_configured": true, 00:12:30.385 "data_offset": 2048, 00:12:30.385 "data_size": 63488 00:12:30.385 }, 00:12:30.385 { 00:12:30.385 "name": "BaseBdev2", 00:12:30.385 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:30.385 "is_configured": true, 00:12:30.385 "data_offset": 2048, 00:12:30.385 "data_size": 63488 00:12:30.385 } 00:12:30.385 ] 00:12:30.385 }' 00:12:30.385 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.386 02:28:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.646 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.646 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.646 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.646 [2024-11-28 02:28:04.184764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.646 [2024-11-28 02:28:04.184855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.646 00:12:30.646 Latency(us) 00:12:30.646 [2024-11-28T02:28:04.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.646 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:30.646 raid_bdev1 : 7.62 95.10 285.29 0.00 0.00 14548.62 314.80 114015.47 00:12:30.646 [2024-11-28T02:28:04.325Z] =================================================================================================================== 00:12:30.646 [2024-11-28T02:28:04.325Z] Total : 95.10 285.29 0.00 0.00 14548.62 314.80 114015.47 00:12:30.646 [2024-11-28 02:28:04.281283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.646 [2024-11-28 
02:28:04.281417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.646 [2024-11-28 02:28:04.281512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.646 [2024-11-28 02:28:04.281572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:30.646 { 00:12:30.646 "results": [ 00:12:30.646 { 00:12:30.646 "job": "raid_bdev1", 00:12:30.646 "core_mask": "0x1", 00:12:30.646 "workload": "randrw", 00:12:30.646 "percentage": 50, 00:12:30.646 "status": "finished", 00:12:30.646 "queue_depth": 2, 00:12:30.646 "io_size": 3145728, 00:12:30.646 "runtime": 7.623827, 00:12:30.646 "iops": 95.09659649936967, 00:12:30.646 "mibps": 285.28978949810903, 00:12:30.646 "io_failed": 0, 00:12:30.646 "io_timeout": 0, 00:12:30.646 "avg_latency_us": 14548.620701701551, 00:12:30.646 "min_latency_us": 314.80174672489085, 00:12:30.646 "max_latency_us": 114015.46899563319 00:12:30.646 } 00:12:30.646 ], 00:12:30.646 "core_count": 1 00:12:30.646 } 00:12:30.646 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.646 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:30.646 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.646 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.646 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.646 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:30.906 /dev/nbd0 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:30.906 
02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.906 1+0 records in 00:12:30.906 1+0 records out 00:12:30.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509069 s, 8.0 MB/s 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.906 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:31.166 /dev/nbd1 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.166 1+0 records in 00:12:31.166 1+0 records out 00:12:31.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532146 s, 7.7 MB/s 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:31.166 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.426 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:31.426 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:31.426 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.426 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:31.426 02:28:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:31.426 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:31.426 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.426 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:31.426 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:31.426 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:31.426 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.426 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.685 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.945 [2024-11-28 02:28:05.475214] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:31.945 [2024-11-28 02:28:05.475281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.945 [2024-11-28 02:28:05.475337] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:31.945 [2024-11-28 02:28:05.475352] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.945 [2024-11-28 02:28:05.477522] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.945 [2024-11-28 02:28:05.477569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:31.945 [2024-11-28 02:28:05.477667] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:31.945 [2024-11-28 02:28:05.477741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.945 [2024-11-28 02:28:05.477936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.945 spare 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.945 [2024-11-28 02:28:05.577854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:31.945 [2024-11-28 02:28:05.577885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.945 [2024-11-28 02:28:05.578177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:31.945 [2024-11-28 02:28:05.578361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:31.945 [2024-11-28 02:28:05.578385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:31.945 [2024-11-28 02:28:05.578555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.945 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.205 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.205 "name": "raid_bdev1", 00:12:32.205 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:32.205 "strip_size_kb": 0, 00:12:32.205 "state": "online", 00:12:32.205 "raid_level": "raid1", 00:12:32.205 "superblock": true, 00:12:32.205 "num_base_bdevs": 2, 00:12:32.205 
"num_base_bdevs_discovered": 2, 00:12:32.205 "num_base_bdevs_operational": 2, 00:12:32.205 "base_bdevs_list": [ 00:12:32.205 { 00:12:32.205 "name": "spare", 00:12:32.205 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:32.205 "is_configured": true, 00:12:32.205 "data_offset": 2048, 00:12:32.205 "data_size": 63488 00:12:32.205 }, 00:12:32.205 { 00:12:32.205 "name": "BaseBdev2", 00:12:32.205 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:32.205 "is_configured": true, 00:12:32.205 "data_offset": 2048, 00:12:32.205 "data_size": 63488 00:12:32.205 } 00:12:32.205 ] 00:12:32.205 }' 00:12:32.205 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.205 02:28:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.465 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.465 "name": "raid_bdev1", 00:12:32.465 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:32.465 "strip_size_kb": 0, 00:12:32.465 "state": "online", 00:12:32.465 "raid_level": "raid1", 00:12:32.465 "superblock": true, 00:12:32.465 "num_base_bdevs": 2, 00:12:32.465 "num_base_bdevs_discovered": 2, 00:12:32.465 "num_base_bdevs_operational": 2, 00:12:32.465 "base_bdevs_list": [ 00:12:32.465 { 00:12:32.465 "name": "spare", 00:12:32.465 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:32.465 "is_configured": true, 00:12:32.465 "data_offset": 2048, 00:12:32.465 "data_size": 63488 00:12:32.465 }, 00:12:32.465 { 00:12:32.465 "name": "BaseBdev2", 00:12:32.466 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:32.466 "is_configured": true, 00:12:32.466 "data_offset": 2048, 00:12:32.466 "data_size": 63488 00:12:32.466 } 00:12:32.466 ] 00:12:32.466 }' 00:12:32.466 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.466 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.466 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 [2024-11-28 02:28:06.202254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.726 
02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.726 "name": "raid_bdev1", 00:12:32.726 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:32.726 "strip_size_kb": 0, 00:12:32.726 "state": "online", 00:12:32.726 "raid_level": "raid1", 00:12:32.726 "superblock": true, 00:12:32.726 "num_base_bdevs": 2, 00:12:32.726 "num_base_bdevs_discovered": 1, 00:12:32.726 "num_base_bdevs_operational": 1, 00:12:32.726 "base_bdevs_list": [ 00:12:32.726 { 00:12:32.726 "name": null, 00:12:32.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.726 "is_configured": false, 00:12:32.726 "data_offset": 0, 00:12:32.726 "data_size": 63488 00:12:32.726 }, 00:12:32.726 { 00:12:32.726 "name": "BaseBdev2", 00:12:32.726 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:32.726 "is_configured": true, 00:12:32.726 "data_offset": 2048, 00:12:32.726 "data_size": 63488 00:12:32.726 } 00:12:32.726 ] 00:12:32.726 }' 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.726 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.986 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:32.986 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.986 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.986 [2024-11-28 02:28:06.629646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.986 [2024-11-28 02:28:06.629861] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:32.986 [2024-11-28 02:28:06.629884] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:32.986 [2024-11-28 02:28:06.629952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.986 [2024-11-28 02:28:06.646456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:12:32.986 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.986 02:28:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:32.986 [2024-11-28 02:28:06.648367] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.368 "name": "raid_bdev1", 00:12:34.368 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:34.368 "strip_size_kb": 0, 00:12:34.368 "state": "online", 00:12:34.368 "raid_level": "raid1", 00:12:34.368 "superblock": true, 00:12:34.368 "num_base_bdevs": 2, 00:12:34.368 "num_base_bdevs_discovered": 2, 00:12:34.368 "num_base_bdevs_operational": 2, 00:12:34.368 "process": { 00:12:34.368 "type": "rebuild", 00:12:34.368 "target": "spare", 00:12:34.368 "progress": { 00:12:34.368 "blocks": 20480, 00:12:34.368 "percent": 32 00:12:34.368 } 00:12:34.368 }, 00:12:34.368 "base_bdevs_list": [ 00:12:34.368 { 00:12:34.368 "name": "spare", 00:12:34.368 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:34.368 "is_configured": true, 00:12:34.368 "data_offset": 2048, 00:12:34.368 "data_size": 63488 00:12:34.368 }, 00:12:34.368 { 00:12:34.368 "name": "BaseBdev2", 00:12:34.368 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:34.368 "is_configured": true, 00:12:34.368 "data_offset": 2048, 00:12:34.368 "data_size": 63488 00:12:34.368 } 00:12:34.368 ] 00:12:34.368 }' 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.368 
[2024-11-28 02:28:07.785550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.368 [2024-11-28 02:28:07.853373] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:34.368 [2024-11-28 02:28:07.853438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.368 [2024-11-28 02:28:07.853475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.368 [2024-11-28 02:28:07.853484] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.368 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.369 
02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.369 "name": "raid_bdev1", 00:12:34.369 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:34.369 "strip_size_kb": 0, 00:12:34.369 "state": "online", 00:12:34.369 "raid_level": "raid1", 00:12:34.369 "superblock": true, 00:12:34.369 "num_base_bdevs": 2, 00:12:34.369 "num_base_bdevs_discovered": 1, 00:12:34.369 "num_base_bdevs_operational": 1, 00:12:34.369 "base_bdevs_list": [ 00:12:34.369 { 00:12:34.369 "name": null, 00:12:34.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.369 "is_configured": false, 00:12:34.369 "data_offset": 0, 00:12:34.369 "data_size": 63488 00:12:34.369 }, 00:12:34.369 { 00:12:34.369 "name": "BaseBdev2", 00:12:34.369 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:34.369 "is_configured": true, 00:12:34.369 "data_offset": 2048, 00:12:34.369 "data_size": 63488 00:12:34.369 } 00:12:34.369 ] 00:12:34.369 }' 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.369 02:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.939 02:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:34.939 02:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.939 02:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.939 [2024-11-28 02:28:08.326964] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:34.939 [2024-11-28 02:28:08.327041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.939 [2024-11-28 02:28:08.327071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:34.939 [2024-11-28 02:28:08.327083] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.939 [2024-11-28 02:28:08.327620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.939 [2024-11-28 02:28:08.327658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:34.939 [2024-11-28 02:28:08.327783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:34.939 [2024-11-28 02:28:08.327803] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:34.939 [2024-11-28 02:28:08.327818] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:34.939 [2024-11-28 02:28:08.327853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:34.939 [2024-11-28 02:28:08.344668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:12:34.939 spare 00:12:34.939 02:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.939 [2024-11-28 02:28:08.346472] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:34.939 02:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.878 "name": "raid_bdev1", 00:12:35.878 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:35.878 "strip_size_kb": 0, 00:12:35.878 
"state": "online", 00:12:35.878 "raid_level": "raid1", 00:12:35.878 "superblock": true, 00:12:35.878 "num_base_bdevs": 2, 00:12:35.878 "num_base_bdevs_discovered": 2, 00:12:35.878 "num_base_bdevs_operational": 2, 00:12:35.878 "process": { 00:12:35.878 "type": "rebuild", 00:12:35.878 "target": "spare", 00:12:35.878 "progress": { 00:12:35.878 "blocks": 20480, 00:12:35.878 "percent": 32 00:12:35.878 } 00:12:35.878 }, 00:12:35.878 "base_bdevs_list": [ 00:12:35.878 { 00:12:35.878 "name": "spare", 00:12:35.878 "uuid": "a648734b-e8ba-5723-a7ec-6b3664584761", 00:12:35.878 "is_configured": true, 00:12:35.878 "data_offset": 2048, 00:12:35.878 "data_size": 63488 00:12:35.878 }, 00:12:35.878 { 00:12:35.878 "name": "BaseBdev2", 00:12:35.878 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:35.878 "is_configured": true, 00:12:35.878 "data_offset": 2048, 00:12:35.878 "data_size": 63488 00:12:35.878 } 00:12:35.878 ] 00:12:35.878 }' 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.878 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.878 [2024-11-28 02:28:09.478067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.878 [2024-11-28 02:28:09.552229] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:12:35.878 [2024-11-28 02:28:09.552331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.878 [2024-11-28 02:28:09.552348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.878 [2024-11-28 02:28:09.552359] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.138 "name": "raid_bdev1", 00:12:36.138 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:36.138 "strip_size_kb": 0, 00:12:36.138 "state": "online", 00:12:36.138 "raid_level": "raid1", 00:12:36.138 "superblock": true, 00:12:36.138 "num_base_bdevs": 2, 00:12:36.138 "num_base_bdevs_discovered": 1, 00:12:36.138 "num_base_bdevs_operational": 1, 00:12:36.138 "base_bdevs_list": [ 00:12:36.138 { 00:12:36.138 "name": null, 00:12:36.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.138 "is_configured": false, 00:12:36.138 "data_offset": 0, 00:12:36.138 "data_size": 63488 00:12:36.138 }, 00:12:36.138 { 00:12:36.138 "name": "BaseBdev2", 00:12:36.138 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:36.138 "is_configured": true, 00:12:36.138 "data_offset": 2048, 00:12:36.138 "data_size": 63488 00:12:36.138 } 00:12:36.138 ] 00:12:36.138 }' 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.138 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.399 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.399 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.399 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.399 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.399 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.399 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.399 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.399 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.399 02:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.399 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.399 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.399 "name": "raid_bdev1", 00:12:36.399 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:36.399 "strip_size_kb": 0, 00:12:36.399 "state": "online", 00:12:36.399 "raid_level": "raid1", 00:12:36.399 "superblock": true, 00:12:36.399 "num_base_bdevs": 2, 00:12:36.399 "num_base_bdevs_discovered": 1, 00:12:36.399 "num_base_bdevs_operational": 1, 00:12:36.399 "base_bdevs_list": [ 00:12:36.399 { 00:12:36.399 "name": null, 00:12:36.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.399 "is_configured": false, 00:12:36.399 "data_offset": 0, 00:12:36.399 "data_size": 63488 00:12:36.399 }, 00:12:36.399 { 00:12:36.399 "name": "BaseBdev2", 00:12:36.399 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:36.399 "is_configured": true, 00:12:36.399 "data_offset": 2048, 00:12:36.399 "data_size": 63488 00:12:36.399 } 00:12:36.399 ] 00:12:36.399 }' 00:12:36.399 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.658 [2024-11-28 02:28:10.129569] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:36.658 [2024-11-28 02:28:10.129641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.658 [2024-11-28 02:28:10.129671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:36.658 [2024-11-28 02:28:10.129688] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.658 [2024-11-28 02:28:10.130170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.658 [2024-11-28 02:28:10.130211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:36.658 [2024-11-28 02:28:10.130298] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:36.658 [2024-11-28 02:28:10.130324] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:36.658 [2024-11-28 02:28:10.130334] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:36.658 [2024-11-28 02:28:10.130348] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:36.658 BaseBdev1 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.658 02:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.596 "name": "raid_bdev1", 00:12:37.596 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:37.596 "strip_size_kb": 0, 00:12:37.596 "state": "online", 00:12:37.596 "raid_level": "raid1", 00:12:37.596 "superblock": true, 00:12:37.596 "num_base_bdevs": 2, 00:12:37.596 "num_base_bdevs_discovered": 1, 00:12:37.596 "num_base_bdevs_operational": 1, 00:12:37.596 "base_bdevs_list": [ 00:12:37.596 { 00:12:37.596 "name": null, 00:12:37.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.596 "is_configured": false, 00:12:37.596 "data_offset": 0, 00:12:37.596 "data_size": 63488 00:12:37.596 }, 00:12:37.596 { 00:12:37.596 "name": "BaseBdev2", 00:12:37.596 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:37.596 "is_configured": true, 00:12:37.596 "data_offset": 2048, 00:12:37.596 "data_size": 63488 00:12:37.596 } 00:12:37.596 ] 00:12:37.596 }' 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.596 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.166 "name": "raid_bdev1", 00:12:38.166 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:38.166 "strip_size_kb": 0, 00:12:38.166 "state": "online", 00:12:38.166 "raid_level": "raid1", 00:12:38.166 "superblock": true, 00:12:38.166 "num_base_bdevs": 2, 00:12:38.166 "num_base_bdevs_discovered": 1, 00:12:38.166 "num_base_bdevs_operational": 1, 00:12:38.166 "base_bdevs_list": [ 00:12:38.166 { 00:12:38.166 "name": null, 00:12:38.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.166 "is_configured": false, 00:12:38.166 "data_offset": 0, 00:12:38.166 "data_size": 63488 00:12:38.166 }, 00:12:38.166 { 00:12:38.166 "name": "BaseBdev2", 00:12:38.166 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:38.166 "is_configured": true, 00:12:38.166 "data_offset": 2048, 00:12:38.166 "data_size": 63488 00:12:38.166 } 00:12:38.166 ] 00:12:38.166 }' 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:38.166 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.167 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.167 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.167 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.167 [2024-11-28 02:28:11.779068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.167 [2024-11-28 02:28:11.779256] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:38.167 [2024-11-28 02:28:11.779276] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:38.167 request: 00:12:38.167 { 00:12:38.167 "base_bdev": "BaseBdev1", 00:12:38.167 "raid_bdev": "raid_bdev1", 00:12:38.167 "method": "bdev_raid_add_base_bdev", 00:12:38.167 "req_id": 1 00:12:38.167 } 00:12:38.167 Got JSON-RPC error response 00:12:38.167 response: 00:12:38.167 { 00:12:38.167 "code": -22, 00:12:38.167 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:38.167 } 00:12:38.167 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:12:38.167 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:38.167 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:38.167 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:38.167 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:38.167 02:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:39.125 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:39.125 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.125 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.125 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.125 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.125 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:39.125 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.125 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.126 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.126 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.126 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.126 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.126 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:39.126 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.386 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.386 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.386 "name": "raid_bdev1", 00:12:39.386 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:39.386 "strip_size_kb": 0, 00:12:39.386 "state": "online", 00:12:39.386 "raid_level": "raid1", 00:12:39.386 "superblock": true, 00:12:39.386 "num_base_bdevs": 2, 00:12:39.386 "num_base_bdevs_discovered": 1, 00:12:39.386 "num_base_bdevs_operational": 1, 00:12:39.386 "base_bdevs_list": [ 00:12:39.386 { 00:12:39.386 "name": null, 00:12:39.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.386 "is_configured": false, 00:12:39.386 "data_offset": 0, 00:12:39.386 "data_size": 63488 00:12:39.386 }, 00:12:39.386 { 00:12:39.386 "name": "BaseBdev2", 00:12:39.386 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:39.386 "is_configured": true, 00:12:39.386 "data_offset": 2048, 00:12:39.386 "data_size": 63488 00:12:39.386 } 00:12:39.386 ] 00:12:39.386 }' 00:12:39.386 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.386 02:28:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.646 02:28:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.646 "name": "raid_bdev1", 00:12:39.646 "uuid": "43b4e47d-b232-4bdb-b39b-ab67ab45a20b", 00:12:39.646 "strip_size_kb": 0, 00:12:39.646 "state": "online", 00:12:39.646 "raid_level": "raid1", 00:12:39.646 "superblock": true, 00:12:39.646 "num_base_bdevs": 2, 00:12:39.646 "num_base_bdevs_discovered": 1, 00:12:39.646 "num_base_bdevs_operational": 1, 00:12:39.646 "base_bdevs_list": [ 00:12:39.646 { 00:12:39.646 "name": null, 00:12:39.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.646 "is_configured": false, 00:12:39.646 "data_offset": 0, 00:12:39.646 "data_size": 63488 00:12:39.646 }, 00:12:39.646 { 00:12:39.646 "name": "BaseBdev2", 00:12:39.646 "uuid": "31d18227-86ee-5b8d-ae01-2100fc7730f9", 00:12:39.646 "is_configured": true, 00:12:39.646 "data_offset": 2048, 00:12:39.646 "data_size": 63488 00:12:39.646 } 00:12:39.646 ] 00:12:39.646 }' 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:39.646 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:39.906 02:28:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76620 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76620 ']' 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76620 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76620 00:12:39.906 killing process with pid 76620 00:12:39.906 Received shutdown signal, test time was about 16.762737 seconds 00:12:39.906 00:12:39.906 Latency(us) 00:12:39.906 [2024-11-28T02:28:13.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.906 [2024-11-28T02:28:13.585Z] =================================================================================================================== 00:12:39.906 [2024-11-28T02:28:13.585Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76620' 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76620 00:12:39.906 [2024-11-28 02:28:13.381877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.906 [2024-11-28 02:28:13.382023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.906 02:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76620 00:12:39.906 [2024-11-28 02:28:13.382083] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.906 [2024-11-28 02:28:13.382094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:40.166 [2024-11-28 02:28:13.598457] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.106 02:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:41.106 00:12:41.106 real 0m19.810s 00:12:41.106 user 0m25.635s 00:12:41.106 sys 0m2.220s 00:12:41.106 02:28:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.106 02:28:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.106 ************************************ 00:12:41.106 END TEST raid_rebuild_test_sb_io 00:12:41.106 ************************************ 00:12:41.366 02:28:14 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:41.366 02:28:14 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:41.367 02:28:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:41.367 02:28:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.367 02:28:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.367 ************************************ 00:12:41.367 START TEST raid_rebuild_test 00:12:41.367 ************************************ 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:41.367 02:28:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77308 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77308 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77308 ']' 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.367 02:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.367 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:41.367 Zero copy mechanism will not be used. 
00:12:41.367 [2024-11-28 02:28:14.901698] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:41.367 [2024-11-28 02:28:14.901828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77308 ] 00:12:41.630 [2024-11-28 02:28:15.051951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.630 [2024-11-28 02:28:15.162243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.889 [2024-11-28 02:28:15.354883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.889 [2024-11-28 02:28:15.354950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.149 BaseBdev1_malloc 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.149 
[2024-11-28 02:28:15.780244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:42.149 [2024-11-28 02:28:15.780321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.149 [2024-11-28 02:28:15.780362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:42.149 [2024-11-28 02:28:15.780376] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.149 [2024-11-28 02:28:15.782464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.149 [2024-11-28 02:28:15.782508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:42.149 BaseBdev1 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.149 BaseBdev2_malloc 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.149 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.410 [2024-11-28 02:28:15.832916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:42.410 [2024-11-28 02:28:15.833065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:42.410 [2024-11-28 02:28:15.833111] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:42.410 [2024-11-28 02:28:15.833152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.410 [2024-11-28 02:28:15.835192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.410 [2024-11-28 02:28:15.835301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:42.410 BaseBdev2 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.410 BaseBdev3_malloc 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.410 [2024-11-28 02:28:15.919215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:42.410 [2024-11-28 02:28:15.919355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.410 [2024-11-28 02:28:15.919383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:42.410 [2024-11-28 02:28:15.919397] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.410 [2024-11-28 02:28:15.921513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.410 [2024-11-28 02:28:15.921563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:42.410 BaseBdev3 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.410 BaseBdev4_malloc 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.410 [2024-11-28 02:28:15.973512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:42.410 [2024-11-28 02:28:15.973639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.410 [2024-11-28 02:28:15.973683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:42.410 [2024-11-28 02:28:15.973732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.410 [2024-11-28 02:28:15.975853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.410 [2024-11-28 02:28:15.975948] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:42.410 BaseBdev4 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.410 02:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.410 spare_malloc 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.410 spare_delay 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.410 [2024-11-28 02:28:16.040513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:42.410 [2024-11-28 02:28:16.040633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.410 [2024-11-28 02:28:16.040674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:42.410 [2024-11-28 02:28:16.040711] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.410 [2024-11-28 
02:28:16.042730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.410 [2024-11-28 02:28:16.042827] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:42.410 spare 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.410 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.410 [2024-11-28 02:28:16.052554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.410 [2024-11-28 02:28:16.054337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.410 [2024-11-28 02:28:16.054448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:42.410 [2024-11-28 02:28:16.054545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:42.410 [2024-11-28 02:28:16.054665] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:42.410 [2024-11-28 02:28:16.054718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:42.410 [2024-11-28 02:28:16.055004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:42.411 [2024-11-28 02:28:16.055220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:42.411 [2024-11-28 02:28:16.055273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:42.411 [2024-11-28 02:28:16.055469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.411 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.671 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.671 "name": "raid_bdev1", 00:12:42.671 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:42.671 "strip_size_kb": 0, 00:12:42.671 "state": "online", 00:12:42.671 "raid_level": 
"raid1", 00:12:42.671 "superblock": false, 00:12:42.671 "num_base_bdevs": 4, 00:12:42.671 "num_base_bdevs_discovered": 4, 00:12:42.671 "num_base_bdevs_operational": 4, 00:12:42.671 "base_bdevs_list": [ 00:12:42.671 { 00:12:42.671 "name": "BaseBdev1", 00:12:42.671 "uuid": "6f787a22-4572-5d07-941e-8710f2404b34", 00:12:42.671 "is_configured": true, 00:12:42.671 "data_offset": 0, 00:12:42.671 "data_size": 65536 00:12:42.671 }, 00:12:42.671 { 00:12:42.671 "name": "BaseBdev2", 00:12:42.671 "uuid": "8deaff97-af99-5bd0-8c18-4f665bcd1701", 00:12:42.671 "is_configured": true, 00:12:42.671 "data_offset": 0, 00:12:42.671 "data_size": 65536 00:12:42.671 }, 00:12:42.671 { 00:12:42.671 "name": "BaseBdev3", 00:12:42.671 "uuid": "30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:42.671 "is_configured": true, 00:12:42.671 "data_offset": 0, 00:12:42.671 "data_size": 65536 00:12:42.671 }, 00:12:42.671 { 00:12:42.671 "name": "BaseBdev4", 00:12:42.671 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:42.671 "is_configured": true, 00:12:42.671 "data_offset": 0, 00:12:42.671 "data_size": 65536 00:12:42.671 } 00:12:42.671 ] 00:12:42.671 }' 00:12:42.671 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.671 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.931 [2024-11-28 02:28:16.528144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.931 02:28:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:42.931 02:28:16 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:43.191 [2024-11-28 02:28:16.779472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:43.191 /dev/nbd0 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.191 1+0 records in 00:12:43.191 1+0 records out 00:12:43.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523462 s, 7.8 MB/s 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:43.191 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.192 02:28:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:43.192 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:43.192 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:43.192 02:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:49.769 65536+0 records in 00:12:49.769 65536+0 records out 00:12:49.769 33554432 bytes (34 MB, 32 MiB) copied, 5.54902 s, 6.0 MB/s 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:49.769 [2024-11-28 02:28:22.589786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:49.769 
02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.769 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.770 [2024-11-28 02:28:22.622783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.770 02:28:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.770 "name": "raid_bdev1", 00:12:49.770 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:49.770 "strip_size_kb": 0, 00:12:49.770 "state": "online", 00:12:49.770 "raid_level": "raid1", 00:12:49.770 "superblock": false, 00:12:49.770 "num_base_bdevs": 4, 00:12:49.770 "num_base_bdevs_discovered": 3, 00:12:49.770 "num_base_bdevs_operational": 3, 00:12:49.770 "base_bdevs_list": [ 00:12:49.770 { 00:12:49.770 "name": null, 00:12:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.770 "is_configured": false, 00:12:49.770 "data_offset": 0, 00:12:49.770 "data_size": 65536 00:12:49.770 }, 00:12:49.770 { 00:12:49.770 "name": "BaseBdev2", 00:12:49.770 "uuid": "8deaff97-af99-5bd0-8c18-4f665bcd1701", 00:12:49.770 "is_configured": true, 00:12:49.770 "data_offset": 0, 00:12:49.770 "data_size": 65536 00:12:49.770 }, 00:12:49.770 { 00:12:49.770 "name": "BaseBdev3", 00:12:49.770 "uuid": "30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:49.770 "is_configured": true, 00:12:49.770 "data_offset": 0, 00:12:49.770 "data_size": 65536 00:12:49.770 }, 00:12:49.770 { 00:12:49.770 "name": "BaseBdev4", 00:12:49.770 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:49.770 
"is_configured": true, 00:12:49.770 "data_offset": 0, 00:12:49.770 "data_size": 65536 00:12:49.770 } 00:12:49.770 ] 00:12:49.770 }' 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.770 02:28:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.770 02:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:49.770 02:28:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.770 02:28:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.770 [2024-11-28 02:28:23.070077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.770 [2024-11-28 02:28:23.084804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:12:49.770 02:28:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.770 02:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:49.770 [2024-11-28 02:28:23.086658] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.710 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.710 "name": "raid_bdev1", 00:12:50.710 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:50.710 "strip_size_kb": 0, 00:12:50.710 "state": "online", 00:12:50.710 "raid_level": "raid1", 00:12:50.710 "superblock": false, 00:12:50.710 "num_base_bdevs": 4, 00:12:50.710 "num_base_bdevs_discovered": 4, 00:12:50.710 "num_base_bdevs_operational": 4, 00:12:50.710 "process": { 00:12:50.710 "type": "rebuild", 00:12:50.710 "target": "spare", 00:12:50.710 "progress": { 00:12:50.710 "blocks": 20480, 00:12:50.710 "percent": 31 00:12:50.710 } 00:12:50.710 }, 00:12:50.710 "base_bdevs_list": [ 00:12:50.710 { 00:12:50.710 "name": "spare", 00:12:50.710 "uuid": "c8f31318-ed2d-5518-901f-f7ffc57c6d24", 00:12:50.710 "is_configured": true, 00:12:50.710 "data_offset": 0, 00:12:50.710 "data_size": 65536 00:12:50.710 }, 00:12:50.710 { 00:12:50.710 "name": "BaseBdev2", 00:12:50.710 "uuid": "8deaff97-af99-5bd0-8c18-4f665bcd1701", 00:12:50.710 "is_configured": true, 00:12:50.710 "data_offset": 0, 00:12:50.710 "data_size": 65536 00:12:50.710 }, 00:12:50.710 { 00:12:50.710 "name": "BaseBdev3", 00:12:50.710 "uuid": "30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:50.710 "is_configured": true, 00:12:50.710 "data_offset": 0, 00:12:50.710 "data_size": 65536 00:12:50.710 }, 00:12:50.710 { 00:12:50.710 "name": "BaseBdev4", 00:12:50.710 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:50.711 "is_configured": true, 00:12:50.711 "data_offset": 0, 00:12:50.711 "data_size": 65536 00:12:50.711 } 00:12:50.711 ] 00:12:50.711 }' 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.711 [2024-11-28 02:28:24.222319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.711 [2024-11-28 02:28:24.291823] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:50.711 [2024-11-28 02:28:24.291971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.711 [2024-11-28 02:28:24.292015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.711 [2024-11-28 02:28:24.292061] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.711 "name": "raid_bdev1", 00:12:50.711 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:50.711 "strip_size_kb": 0, 00:12:50.711 "state": "online", 00:12:50.711 "raid_level": "raid1", 00:12:50.711 "superblock": false, 00:12:50.711 "num_base_bdevs": 4, 00:12:50.711 "num_base_bdevs_discovered": 3, 00:12:50.711 "num_base_bdevs_operational": 3, 00:12:50.711 "base_bdevs_list": [ 00:12:50.711 { 00:12:50.711 "name": null, 00:12:50.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.711 "is_configured": false, 00:12:50.711 "data_offset": 0, 00:12:50.711 "data_size": 65536 00:12:50.711 }, 00:12:50.711 { 00:12:50.711 "name": "BaseBdev2", 00:12:50.711 "uuid": "8deaff97-af99-5bd0-8c18-4f665bcd1701", 00:12:50.711 "is_configured": true, 00:12:50.711 "data_offset": 0, 00:12:50.711 "data_size": 65536 00:12:50.711 }, 00:12:50.711 { 
00:12:50.711 "name": "BaseBdev3", 00:12:50.711 "uuid": "30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:50.711 "is_configured": true, 00:12:50.711 "data_offset": 0, 00:12:50.711 "data_size": 65536 00:12:50.711 }, 00:12:50.711 { 00:12:50.711 "name": "BaseBdev4", 00:12:50.711 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:50.711 "is_configured": true, 00:12:50.711 "data_offset": 0, 00:12:50.711 "data_size": 65536 00:12:50.711 } 00:12:50.711 ] 00:12:50.711 }' 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.711 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.280 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.280 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.280 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:51.280 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.280 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.280 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.280 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.280 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.281 "name": "raid_bdev1", 00:12:51.281 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:51.281 "strip_size_kb": 0, 00:12:51.281 "state": "online", 
00:12:51.281 "raid_level": "raid1", 00:12:51.281 "superblock": false, 00:12:51.281 "num_base_bdevs": 4, 00:12:51.281 "num_base_bdevs_discovered": 3, 00:12:51.281 "num_base_bdevs_operational": 3, 00:12:51.281 "base_bdevs_list": [ 00:12:51.281 { 00:12:51.281 "name": null, 00:12:51.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.281 "is_configured": false, 00:12:51.281 "data_offset": 0, 00:12:51.281 "data_size": 65536 00:12:51.281 }, 00:12:51.281 { 00:12:51.281 "name": "BaseBdev2", 00:12:51.281 "uuid": "8deaff97-af99-5bd0-8c18-4f665bcd1701", 00:12:51.281 "is_configured": true, 00:12:51.281 "data_offset": 0, 00:12:51.281 "data_size": 65536 00:12:51.281 }, 00:12:51.281 { 00:12:51.281 "name": "BaseBdev3", 00:12:51.281 "uuid": "30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:51.281 "is_configured": true, 00:12:51.281 "data_offset": 0, 00:12:51.281 "data_size": 65536 00:12:51.281 }, 00:12:51.281 { 00:12:51.281 "name": "BaseBdev4", 00:12:51.281 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:51.281 "is_configured": true, 00:12:51.281 "data_offset": 0, 00:12:51.281 "data_size": 65536 00:12:51.281 } 00:12:51.281 ] 00:12:51.281 }' 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.281 [2024-11-28 02:28:24.843886] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.281 [2024-11-28 02:28:24.858527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.281 02:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:51.281 [2024-11-28 02:28:24.860406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:52.222 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.222 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.222 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.222 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.222 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.222 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.222 02:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.222 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.222 02:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.222 02:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.482 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.482 "name": "raid_bdev1", 00:12:52.482 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:52.482 "strip_size_kb": 0, 00:12:52.482 "state": "online", 00:12:52.482 "raid_level": "raid1", 00:12:52.482 "superblock": false, 00:12:52.482 "num_base_bdevs": 4, 00:12:52.482 
"num_base_bdevs_discovered": 4, 00:12:52.482 "num_base_bdevs_operational": 4, 00:12:52.482 "process": { 00:12:52.482 "type": "rebuild", 00:12:52.482 "target": "spare", 00:12:52.482 "progress": { 00:12:52.482 "blocks": 20480, 00:12:52.482 "percent": 31 00:12:52.482 } 00:12:52.482 }, 00:12:52.482 "base_bdevs_list": [ 00:12:52.482 { 00:12:52.482 "name": "spare", 00:12:52.482 "uuid": "c8f31318-ed2d-5518-901f-f7ffc57c6d24", 00:12:52.482 "is_configured": true, 00:12:52.482 "data_offset": 0, 00:12:52.482 "data_size": 65536 00:12:52.482 }, 00:12:52.482 { 00:12:52.482 "name": "BaseBdev2", 00:12:52.482 "uuid": "8deaff97-af99-5bd0-8c18-4f665bcd1701", 00:12:52.482 "is_configured": true, 00:12:52.482 "data_offset": 0, 00:12:52.482 "data_size": 65536 00:12:52.482 }, 00:12:52.482 { 00:12:52.482 "name": "BaseBdev3", 00:12:52.482 "uuid": "30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:52.482 "is_configured": true, 00:12:52.482 "data_offset": 0, 00:12:52.482 "data_size": 65536 00:12:52.482 }, 00:12:52.482 { 00:12:52.482 "name": "BaseBdev4", 00:12:52.482 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:52.482 "is_configured": true, 00:12:52.482 "data_offset": 0, 00:12:52.482 "data_size": 65536 00:12:52.482 } 00:12:52.482 ] 00:12:52.482 }' 00:12:52.482 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.482 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.482 02:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.482 [2024-11-28 02:28:26.027696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.482 [2024-11-28 02:28:26.065973] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.482 02:28:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.482 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.482 "name": "raid_bdev1", 00:12:52.482 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:52.482 "strip_size_kb": 0, 00:12:52.482 "state": "online", 00:12:52.482 "raid_level": "raid1", 00:12:52.482 "superblock": false, 00:12:52.482 "num_base_bdevs": 4, 00:12:52.482 "num_base_bdevs_discovered": 3, 00:12:52.482 "num_base_bdevs_operational": 3, 00:12:52.482 "process": { 00:12:52.482 "type": "rebuild", 00:12:52.482 "target": "spare", 00:12:52.482 "progress": { 00:12:52.482 "blocks": 24576, 00:12:52.482 "percent": 37 00:12:52.482 } 00:12:52.482 }, 00:12:52.482 "base_bdevs_list": [ 00:12:52.482 { 00:12:52.482 "name": "spare", 00:12:52.482 "uuid": "c8f31318-ed2d-5518-901f-f7ffc57c6d24", 00:12:52.482 "is_configured": true, 00:12:52.482 "data_offset": 0, 00:12:52.482 "data_size": 65536 00:12:52.482 }, 00:12:52.482 { 00:12:52.482 "name": null, 00:12:52.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.482 "is_configured": false, 00:12:52.482 "data_offset": 0, 00:12:52.482 "data_size": 65536 00:12:52.483 }, 00:12:52.483 { 00:12:52.483 "name": "BaseBdev3", 00:12:52.483 "uuid": "30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:52.483 "is_configured": true, 00:12:52.483 "data_offset": 0, 00:12:52.483 "data_size": 65536 00:12:52.483 }, 00:12:52.483 { 00:12:52.483 "name": "BaseBdev4", 00:12:52.483 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:52.483 "is_configured": true, 00:12:52.483 "data_offset": 0, 00:12:52.483 "data_size": 65536 00:12:52.483 } 00:12:52.483 ] 00:12:52.483 }' 00:12:52.483 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=436 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.742 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.742 "name": "raid_bdev1", 00:12:52.742 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:52.742 "strip_size_kb": 0, 00:12:52.742 "state": "online", 00:12:52.742 "raid_level": "raid1", 00:12:52.742 "superblock": false, 00:12:52.742 "num_base_bdevs": 4, 00:12:52.742 "num_base_bdevs_discovered": 3, 00:12:52.742 "num_base_bdevs_operational": 3, 00:12:52.742 "process": { 00:12:52.742 "type": "rebuild", 00:12:52.742 "target": "spare", 00:12:52.742 "progress": { 
00:12:52.742 "blocks": 26624, 00:12:52.742 "percent": 40 00:12:52.742 } 00:12:52.742 }, 00:12:52.742 "base_bdevs_list": [ 00:12:52.742 { 00:12:52.742 "name": "spare", 00:12:52.742 "uuid": "c8f31318-ed2d-5518-901f-f7ffc57c6d24", 00:12:52.742 "is_configured": true, 00:12:52.742 "data_offset": 0, 00:12:52.742 "data_size": 65536 00:12:52.742 }, 00:12:52.742 { 00:12:52.742 "name": null, 00:12:52.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.742 "is_configured": false, 00:12:52.742 "data_offset": 0, 00:12:52.742 "data_size": 65536 00:12:52.742 }, 00:12:52.742 { 00:12:52.742 "name": "BaseBdev3", 00:12:52.742 "uuid": "30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:52.742 "is_configured": true, 00:12:52.742 "data_offset": 0, 00:12:52.742 "data_size": 65536 00:12:52.742 }, 00:12:52.742 { 00:12:52.742 "name": "BaseBdev4", 00:12:52.742 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:52.742 "is_configured": true, 00:12:52.742 "data_offset": 0, 00:12:52.742 "data_size": 65536 00:12:52.742 } 00:12:52.742 ] 00:12:52.743 }' 00:12:52.743 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.743 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.743 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.743 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.743 02:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.681 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.681 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.681 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.681 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:53.681 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.681 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.681 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.681 02:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.681 02:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.681 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.939 02:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.939 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.939 "name": "raid_bdev1", 00:12:53.939 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:53.939 "strip_size_kb": 0, 00:12:53.939 "state": "online", 00:12:53.939 "raid_level": "raid1", 00:12:53.939 "superblock": false, 00:12:53.939 "num_base_bdevs": 4, 00:12:53.939 "num_base_bdevs_discovered": 3, 00:12:53.939 "num_base_bdevs_operational": 3, 00:12:53.939 "process": { 00:12:53.939 "type": "rebuild", 00:12:53.939 "target": "spare", 00:12:53.939 "progress": { 00:12:53.939 "blocks": 49152, 00:12:53.939 "percent": 75 00:12:53.939 } 00:12:53.939 }, 00:12:53.939 "base_bdevs_list": [ 00:12:53.939 { 00:12:53.939 "name": "spare", 00:12:53.939 "uuid": "c8f31318-ed2d-5518-901f-f7ffc57c6d24", 00:12:53.939 "is_configured": true, 00:12:53.939 "data_offset": 0, 00:12:53.939 "data_size": 65536 00:12:53.939 }, 00:12:53.939 { 00:12:53.939 "name": null, 00:12:53.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.939 "is_configured": false, 00:12:53.939 "data_offset": 0, 00:12:53.939 "data_size": 65536 00:12:53.939 }, 00:12:53.939 { 00:12:53.939 "name": "BaseBdev3", 00:12:53.939 "uuid": 
"30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:53.939 "is_configured": true, 00:12:53.939 "data_offset": 0, 00:12:53.939 "data_size": 65536 00:12:53.939 }, 00:12:53.939 { 00:12:53.939 "name": "BaseBdev4", 00:12:53.939 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:53.939 "is_configured": true, 00:12:53.939 "data_offset": 0, 00:12:53.939 "data_size": 65536 00:12:53.939 } 00:12:53.939 ] 00:12:53.939 }' 00:12:53.939 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.939 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.939 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.939 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.939 02:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.508 [2024-11-28 02:28:28.074911] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:54.508 [2024-11-28 02:28:28.075119] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:54.508 [2024-11-28 02:28:28.075201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.076 02:28:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.076 "name": "raid_bdev1", 00:12:55.076 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:55.076 "strip_size_kb": 0, 00:12:55.076 "state": "online", 00:12:55.076 "raid_level": "raid1", 00:12:55.076 "superblock": false, 00:12:55.076 "num_base_bdevs": 4, 00:12:55.076 "num_base_bdevs_discovered": 3, 00:12:55.076 "num_base_bdevs_operational": 3, 00:12:55.076 "base_bdevs_list": [ 00:12:55.076 { 00:12:55.076 "name": "spare", 00:12:55.076 "uuid": "c8f31318-ed2d-5518-901f-f7ffc57c6d24", 00:12:55.076 "is_configured": true, 00:12:55.076 "data_offset": 0, 00:12:55.076 "data_size": 65536 00:12:55.076 }, 00:12:55.076 { 00:12:55.076 "name": null, 00:12:55.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.076 "is_configured": false, 00:12:55.076 "data_offset": 0, 00:12:55.076 "data_size": 65536 00:12:55.076 }, 00:12:55.076 { 00:12:55.076 "name": "BaseBdev3", 00:12:55.076 "uuid": "30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:55.076 "is_configured": true, 00:12:55.076 "data_offset": 0, 00:12:55.076 "data_size": 65536 00:12:55.076 }, 00:12:55.076 { 00:12:55.076 "name": "BaseBdev4", 00:12:55.076 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:55.076 "is_configured": true, 00:12:55.076 "data_offset": 0, 00:12:55.076 "data_size": 65536 00:12:55.076 } 00:12:55.076 ] 00:12:55.076 }' 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.076 "name": "raid_bdev1", 00:12:55.076 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:55.076 "strip_size_kb": 0, 00:12:55.076 "state": "online", 00:12:55.076 "raid_level": "raid1", 00:12:55.076 "superblock": false, 00:12:55.076 "num_base_bdevs": 4, 00:12:55.076 "num_base_bdevs_discovered": 3, 00:12:55.076 "num_base_bdevs_operational": 3, 00:12:55.076 
"base_bdevs_list": [ 00:12:55.076 { 00:12:55.076 "name": "spare", 00:12:55.076 "uuid": "c8f31318-ed2d-5518-901f-f7ffc57c6d24", 00:12:55.076 "is_configured": true, 00:12:55.076 "data_offset": 0, 00:12:55.076 "data_size": 65536 00:12:55.076 }, 00:12:55.076 { 00:12:55.076 "name": null, 00:12:55.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.076 "is_configured": false, 00:12:55.076 "data_offset": 0, 00:12:55.076 "data_size": 65536 00:12:55.076 }, 00:12:55.076 { 00:12:55.076 "name": "BaseBdev3", 00:12:55.076 "uuid": "30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:55.076 "is_configured": true, 00:12:55.076 "data_offset": 0, 00:12:55.076 "data_size": 65536 00:12:55.076 }, 00:12:55.076 { 00:12:55.076 "name": "BaseBdev4", 00:12:55.076 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:55.076 "is_configured": true, 00:12:55.076 "data_offset": 0, 00:12:55.076 "data_size": 65536 00:12:55.076 } 00:12:55.076 ] 00:12:55.076 }' 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:55.076 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.336 "name": "raid_bdev1", 00:12:55.336 "uuid": "d3463b14-8df7-4a53-b3c1-9294f4a74c45", 00:12:55.336 "strip_size_kb": 0, 00:12:55.336 "state": "online", 00:12:55.336 "raid_level": "raid1", 00:12:55.336 "superblock": false, 00:12:55.336 "num_base_bdevs": 4, 00:12:55.336 "num_base_bdevs_discovered": 3, 00:12:55.336 "num_base_bdevs_operational": 3, 00:12:55.336 "base_bdevs_list": [ 00:12:55.336 { 00:12:55.336 "name": "spare", 00:12:55.336 "uuid": "c8f31318-ed2d-5518-901f-f7ffc57c6d24", 00:12:55.336 "is_configured": true, 00:12:55.336 "data_offset": 0, 00:12:55.336 "data_size": 65536 00:12:55.336 }, 00:12:55.336 { 00:12:55.336 "name": null, 00:12:55.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.336 "is_configured": false, 00:12:55.336 "data_offset": 0, 00:12:55.336 "data_size": 65536 00:12:55.336 }, 00:12:55.336 { 00:12:55.336 "name": "BaseBdev3", 00:12:55.336 "uuid": 
"30406431-06fd-5dc9-b70a-30d570c5cc1b", 00:12:55.336 "is_configured": true, 00:12:55.336 "data_offset": 0, 00:12:55.336 "data_size": 65536 00:12:55.336 }, 00:12:55.336 { 00:12:55.336 "name": "BaseBdev4", 00:12:55.336 "uuid": "fad86f33-9a06-5e66-9b71-067fdf262f2a", 00:12:55.336 "is_configured": true, 00:12:55.336 "data_offset": 0, 00:12:55.336 "data_size": 65536 00:12:55.336 } 00:12:55.336 ] 00:12:55.336 }' 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.336 02:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.595 02:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.595 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.595 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.595 [2024-11-28 02:28:29.218456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.595 [2024-11-28 02:28:29.218546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.595 [2024-11-28 02:28:29.218660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.595 [2024-11-28 02:28:29.218785] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.595 [2024-11-28 02:28:29.218845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:55.595 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.595 02:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.595 02:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:55.595 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:55.595 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.595 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:55.855 /dev/nbd0 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:55.855 02:28:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.855 1+0 records in 00:12:55.855 1+0 records out 00:12:55.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245005 s, 16.7 MB/s 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:55.855 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:56.114 /dev/nbd1 00:12:56.114 
02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.114 1+0 records in 00:12:56.114 1+0 records out 00:12:56.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042087 s, 9.7 MB/s 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:56.114 02:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:56.376 02:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:56.376 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.376 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:56.376 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:56.376 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:56.376 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.376 02:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:56.643 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:56.643 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:56.643 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:56.643 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.643 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.643 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:56.643 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:56.643 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.643 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.643 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77308 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77308 ']' 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77308 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77308 00:12:56.936 killing process with pid 77308 00:12:56.936 Received shutdown signal, test time was about 60.000000 seconds 00:12:56.936 00:12:56.936 Latency(us) 00:12:56.936 [2024-11-28T02:28:30.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.936 [2024-11-28T02:28:30.615Z] 
=================================================================================================================== 00:12:56.936 [2024-11-28T02:28:30.615Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77308' 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77308 00:12:56.936 [2024-11-28 02:28:30.410034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.936 02:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77308 00:12:57.505 [2024-11-28 02:28:30.887789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.443 02:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:58.443 00:12:58.443 real 0m17.181s 00:12:58.443 user 0m19.101s 00:12:58.443 sys 0m2.967s 00:12:58.443 02:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.443 02:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.443 ************************************ 00:12:58.443 END TEST raid_rebuild_test 00:12:58.443 ************************************ 00:12:58.443 02:28:32 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:58.443 02:28:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:58.443 02:28:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.443 02:28:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.443 ************************************ 00:12:58.443 START TEST raid_rebuild_test_sb 00:12:58.443 
************************************ 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77744 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77744 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77744 ']' 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.443 02:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.702 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:58.702 Zero copy mechanism will not be used. 00:12:58.702 [2024-11-28 02:28:32.154547] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:58.702 [2024-11-28 02:28:32.154673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77744 ] 00:12:58.702 [2024-11-28 02:28:32.329967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.960 [2024-11-28 02:28:32.443205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.960 [2024-11-28 02:28:32.636328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.960 [2024-11-28 02:28:32.636370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.527 02:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.527 02:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:59.527 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.527 02:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:12:59.527 02:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.527 02:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.527 BaseBdev1_malloc 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.527 [2024-11-28 02:28:33.023898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:59.527 [2024-11-28 02:28:33.023985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.527 [2024-11-28 02:28:33.024015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:59.527 [2024-11-28 02:28:33.024035] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.527 [2024-11-28 02:28:33.026049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.527 [2024-11-28 02:28:33.026091] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:59.527 BaseBdev1 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:59.527 BaseBdev2_malloc 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.527 [2024-11-28 02:28:33.072763] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:59.527 [2024-11-28 02:28:33.072835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.527 [2024-11-28 02:28:33.072868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:59.527 [2024-11-28 02:28:33.072882] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.527 [2024-11-28 02:28:33.074993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.527 [2024-11-28 02:28:33.075034] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:59.527 BaseBdev2 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.527 BaseBdev3_malloc 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.527 [2024-11-28 02:28:33.132611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:59.527 [2024-11-28 02:28:33.132677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.527 [2024-11-28 02:28:33.132720] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:59.527 [2024-11-28 02:28:33.132741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.527 [2024-11-28 02:28:33.135075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.527 [2024-11-28 02:28:33.135120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:59.527 BaseBdev3 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:59.527 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.528 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.528 BaseBdev4_malloc 00:12:59.528 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.528 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:59.528 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:59.528 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.528 [2024-11-28 02:28:33.185298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:59.528 [2024-11-28 02:28:33.185370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.528 [2024-11-28 02:28:33.185397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:59.528 [2024-11-28 02:28:33.185411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.528 [2024-11-28 02:28:33.187391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.528 [2024-11-28 02:28:33.187440] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:59.528 BaseBdev4 00:12:59.528 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.528 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:59.528 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.528 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.786 spare_malloc 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.786 spare_delay 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.786 [2024-11-28 02:28:33.248978] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:59.786 [2024-11-28 02:28:33.249037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.786 [2024-11-28 02:28:33.249057] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:59.786 [2024-11-28 02:28:33.249071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.786 [2024-11-28 02:28:33.251084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.786 [2024-11-28 02:28:33.251125] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:59.786 spare 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.786 [2024-11-28 02:28:33.261000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.786 [2024-11-28 02:28:33.262735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.786 [2024-11-28 02:28:33.262808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.786 [2024-11-28 02:28:33.262872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:12:59.786 [2024-11-28 02:28:33.263082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:59.786 [2024-11-28 02:28:33.263106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.786 [2024-11-28 02:28:33.263356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:59.786 [2024-11-28 02:28:33.263555] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:59.786 [2024-11-28 02:28:33.263573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:59.786 [2024-11-28 02:28:33.263730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.786 "name": "raid_bdev1", 00:12:59.786 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:12:59.786 "strip_size_kb": 0, 00:12:59.786 "state": "online", 00:12:59.786 "raid_level": "raid1", 00:12:59.786 "superblock": true, 00:12:59.786 "num_base_bdevs": 4, 00:12:59.786 "num_base_bdevs_discovered": 4, 00:12:59.786 "num_base_bdevs_operational": 4, 00:12:59.786 "base_bdevs_list": [ 00:12:59.786 { 00:12:59.786 "name": "BaseBdev1", 00:12:59.786 "uuid": "45701c3d-118d-5d94-8687-440ec3d9eea5", 00:12:59.786 "is_configured": true, 00:12:59.786 "data_offset": 2048, 00:12:59.786 "data_size": 63488 00:12:59.786 }, 00:12:59.786 { 00:12:59.786 "name": "BaseBdev2", 00:12:59.786 "uuid": "13ef000d-9bb7-563b-b270-23a54fdbd51f", 00:12:59.786 "is_configured": true, 00:12:59.786 "data_offset": 2048, 00:12:59.786 "data_size": 63488 00:12:59.786 }, 00:12:59.786 { 00:12:59.786 "name": "BaseBdev3", 00:12:59.786 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:12:59.786 "is_configured": true, 00:12:59.786 "data_offset": 2048, 00:12:59.786 "data_size": 63488 00:12:59.786 }, 00:12:59.786 { 00:12:59.786 "name": "BaseBdev4", 00:12:59.786 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:12:59.786 "is_configured": true, 00:12:59.786 "data_offset": 2048, 00:12:59.786 "data_size": 63488 00:12:59.786 } 00:12:59.786 ] 00:12:59.786 }' 
00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.786 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.044 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.044 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.044 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.044 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:00.044 [2024-11-28 02:28:33.720590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.303 02:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:00.303 [2024-11-28 02:28:33.976150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:00.561 /dev/nbd0 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@877 -- # break 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.561 1+0 records in 00:13:00.561 1+0 records out 00:13:00.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389084 s, 10.5 MB/s 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:00.561 02:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:07.134 63488+0 records in 00:13:07.134 63488+0 records out 00:13:07.134 32505856 bytes (33 MB, 31 MiB) copied, 5.58559 s, 5.8 MB/s 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:07.134 02:28:39 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.134 [2024-11-28 02:28:39.831096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.134 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.135 [2024-11-28 02:28:39.847459] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.135 "name": "raid_bdev1", 00:13:07.135 "uuid": 
"251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:07.135 "strip_size_kb": 0, 00:13:07.135 "state": "online", 00:13:07.135 "raid_level": "raid1", 00:13:07.135 "superblock": true, 00:13:07.135 "num_base_bdevs": 4, 00:13:07.135 "num_base_bdevs_discovered": 3, 00:13:07.135 "num_base_bdevs_operational": 3, 00:13:07.135 "base_bdevs_list": [ 00:13:07.135 { 00:13:07.135 "name": null, 00:13:07.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.135 "is_configured": false, 00:13:07.135 "data_offset": 0, 00:13:07.135 "data_size": 63488 00:13:07.135 }, 00:13:07.135 { 00:13:07.135 "name": "BaseBdev2", 00:13:07.135 "uuid": "13ef000d-9bb7-563b-b270-23a54fdbd51f", 00:13:07.135 "is_configured": true, 00:13:07.135 "data_offset": 2048, 00:13:07.135 "data_size": 63488 00:13:07.135 }, 00:13:07.135 { 00:13:07.135 "name": "BaseBdev3", 00:13:07.135 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:07.135 "is_configured": true, 00:13:07.135 "data_offset": 2048, 00:13:07.135 "data_size": 63488 00:13:07.135 }, 00:13:07.135 { 00:13:07.135 "name": "BaseBdev4", 00:13:07.135 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:07.135 "is_configured": true, 00:13:07.135 "data_offset": 2048, 00:13:07.135 "data_size": 63488 00:13:07.135 } 00:13:07.135 ] 00:13:07.135 }' 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.135 02:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.135 02:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.135 02:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.135 02:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.135 [2024-11-28 02:28:40.306687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.135 [2024-11-28 02:28:40.322418] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:07.135 02:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.135 [2024-11-28 02:28:40.324392] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.135 02:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:07.702 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.702 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.702 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.702 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.702 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.702 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.702 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.702 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.702 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.702 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.961 "name": "raid_bdev1", 00:13:07.961 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:07.961 "strip_size_kb": 0, 00:13:07.961 "state": "online", 00:13:07.961 "raid_level": "raid1", 00:13:07.961 "superblock": true, 00:13:07.961 "num_base_bdevs": 4, 00:13:07.961 "num_base_bdevs_discovered": 4, 00:13:07.961 "num_base_bdevs_operational": 4, 00:13:07.961 "process": { 00:13:07.961 "type": 
"rebuild", 00:13:07.961 "target": "spare", 00:13:07.961 "progress": { 00:13:07.961 "blocks": 20480, 00:13:07.961 "percent": 32 00:13:07.961 } 00:13:07.961 }, 00:13:07.961 "base_bdevs_list": [ 00:13:07.961 { 00:13:07.961 "name": "spare", 00:13:07.961 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:07.961 "is_configured": true, 00:13:07.961 "data_offset": 2048, 00:13:07.961 "data_size": 63488 00:13:07.961 }, 00:13:07.961 { 00:13:07.961 "name": "BaseBdev2", 00:13:07.961 "uuid": "13ef000d-9bb7-563b-b270-23a54fdbd51f", 00:13:07.961 "is_configured": true, 00:13:07.961 "data_offset": 2048, 00:13:07.961 "data_size": 63488 00:13:07.961 }, 00:13:07.961 { 00:13:07.961 "name": "BaseBdev3", 00:13:07.961 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:07.961 "is_configured": true, 00:13:07.961 "data_offset": 2048, 00:13:07.961 "data_size": 63488 00:13:07.961 }, 00:13:07.961 { 00:13:07.961 "name": "BaseBdev4", 00:13:07.961 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:07.961 "is_configured": true, 00:13:07.961 "data_offset": 2048, 00:13:07.961 "data_size": 63488 00:13:07.961 } 00:13:07.961 ] 00:13:07.961 }' 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.961 [2024-11-28 02:28:41.487730] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.961 [2024-11-28 02:28:41.529682] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:07.961 [2024-11-28 02:28:41.529836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.961 [2024-11-28 02:28:41.529883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.961 [2024-11-28 02:28:41.529913] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.961 "name": "raid_bdev1", 00:13:07.961 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:07.961 "strip_size_kb": 0, 00:13:07.961 "state": "online", 00:13:07.961 "raid_level": "raid1", 00:13:07.961 "superblock": true, 00:13:07.961 "num_base_bdevs": 4, 00:13:07.961 "num_base_bdevs_discovered": 3, 00:13:07.961 "num_base_bdevs_operational": 3, 00:13:07.961 "base_bdevs_list": [ 00:13:07.961 { 00:13:07.961 "name": null, 00:13:07.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.961 "is_configured": false, 00:13:07.961 "data_offset": 0, 00:13:07.961 "data_size": 63488 00:13:07.961 }, 00:13:07.961 { 00:13:07.961 "name": "BaseBdev2", 00:13:07.961 "uuid": "13ef000d-9bb7-563b-b270-23a54fdbd51f", 00:13:07.961 "is_configured": true, 00:13:07.961 "data_offset": 2048, 00:13:07.961 "data_size": 63488 00:13:07.961 }, 00:13:07.961 { 00:13:07.961 "name": "BaseBdev3", 00:13:07.961 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:07.961 "is_configured": true, 00:13:07.961 "data_offset": 2048, 00:13:07.961 "data_size": 63488 00:13:07.961 }, 00:13:07.961 { 00:13:07.961 "name": "BaseBdev4", 00:13:07.961 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:07.961 "is_configured": true, 00:13:07.961 "data_offset": 2048, 00:13:07.961 "data_size": 63488 00:13:07.961 } 00:13:07.961 ] 00:13:07.961 }' 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.961 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.528 02:28:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.528 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.528 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.528 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.528 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.528 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.528 02:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.528 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.528 02:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.528 02:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.528 02:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.528 "name": "raid_bdev1", 00:13:08.528 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:08.528 "strip_size_kb": 0, 00:13:08.528 "state": "online", 00:13:08.528 "raid_level": "raid1", 00:13:08.528 "superblock": true, 00:13:08.528 "num_base_bdevs": 4, 00:13:08.528 "num_base_bdevs_discovered": 3, 00:13:08.528 "num_base_bdevs_operational": 3, 00:13:08.528 "base_bdevs_list": [ 00:13:08.528 { 00:13:08.528 "name": null, 00:13:08.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.528 "is_configured": false, 00:13:08.528 "data_offset": 0, 00:13:08.528 "data_size": 63488 00:13:08.528 }, 00:13:08.528 { 00:13:08.528 "name": "BaseBdev2", 00:13:08.528 "uuid": "13ef000d-9bb7-563b-b270-23a54fdbd51f", 00:13:08.528 "is_configured": true, 00:13:08.528 "data_offset": 2048, 00:13:08.528 "data_size": 
63488 00:13:08.528 }, 00:13:08.528 { 00:13:08.528 "name": "BaseBdev3", 00:13:08.528 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:08.528 "is_configured": true, 00:13:08.528 "data_offset": 2048, 00:13:08.528 "data_size": 63488 00:13:08.528 }, 00:13:08.528 { 00:13:08.528 "name": "BaseBdev4", 00:13:08.528 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:08.528 "is_configured": true, 00:13:08.528 "data_offset": 2048, 00:13:08.528 "data_size": 63488 00:13:08.528 } 00:13:08.528 ] 00:13:08.528 }' 00:13:08.528 02:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.528 02:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.528 02:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.528 02:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.528 02:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:08.528 02:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.528 02:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.528 [2024-11-28 02:28:42.135076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:08.528 [2024-11-28 02:28:42.149428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:08.529 02:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.529 02:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:08.529 [2024-11-28 02:28:42.151331] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.908 "name": "raid_bdev1", 00:13:09.908 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:09.908 "strip_size_kb": 0, 00:13:09.908 "state": "online", 00:13:09.908 "raid_level": "raid1", 00:13:09.908 "superblock": true, 00:13:09.908 "num_base_bdevs": 4, 00:13:09.908 "num_base_bdevs_discovered": 4, 00:13:09.908 "num_base_bdevs_operational": 4, 00:13:09.908 "process": { 00:13:09.908 "type": "rebuild", 00:13:09.908 "target": "spare", 00:13:09.908 "progress": { 00:13:09.908 "blocks": 20480, 00:13:09.908 "percent": 32 00:13:09.908 } 00:13:09.908 }, 00:13:09.908 "base_bdevs_list": [ 00:13:09.908 { 00:13:09.908 "name": "spare", 00:13:09.908 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:09.908 "is_configured": true, 00:13:09.908 "data_offset": 2048, 00:13:09.908 "data_size": 63488 00:13:09.908 }, 00:13:09.908 { 00:13:09.908 "name": "BaseBdev2", 00:13:09.908 "uuid": 
"13ef000d-9bb7-563b-b270-23a54fdbd51f", 00:13:09.908 "is_configured": true, 00:13:09.908 "data_offset": 2048, 00:13:09.908 "data_size": 63488 00:13:09.908 }, 00:13:09.908 { 00:13:09.908 "name": "BaseBdev3", 00:13:09.908 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:09.908 "is_configured": true, 00:13:09.908 "data_offset": 2048, 00:13:09.908 "data_size": 63488 00:13:09.908 }, 00:13:09.908 { 00:13:09.908 "name": "BaseBdev4", 00:13:09.908 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:09.908 "is_configured": true, 00:13:09.908 "data_offset": 2048, 00:13:09.908 "data_size": 63488 00:13:09.908 } 00:13:09.908 ] 00:13:09.908 }' 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:09.908 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.908 02:28:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.908 [2024-11-28 02:28:43.307250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:09.908 [2024-11-28 02:28:43.456749] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.908 "name": "raid_bdev1", 00:13:09.908 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:09.908 "strip_size_kb": 0, 00:13:09.908 
"state": "online", 00:13:09.908 "raid_level": "raid1", 00:13:09.908 "superblock": true, 00:13:09.908 "num_base_bdevs": 4, 00:13:09.908 "num_base_bdevs_discovered": 3, 00:13:09.908 "num_base_bdevs_operational": 3, 00:13:09.908 "process": { 00:13:09.908 "type": "rebuild", 00:13:09.908 "target": "spare", 00:13:09.908 "progress": { 00:13:09.908 "blocks": 24576, 00:13:09.908 "percent": 38 00:13:09.908 } 00:13:09.908 }, 00:13:09.908 "base_bdevs_list": [ 00:13:09.908 { 00:13:09.908 "name": "spare", 00:13:09.908 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:09.908 "is_configured": true, 00:13:09.908 "data_offset": 2048, 00:13:09.908 "data_size": 63488 00:13:09.908 }, 00:13:09.908 { 00:13:09.908 "name": null, 00:13:09.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.908 "is_configured": false, 00:13:09.908 "data_offset": 0, 00:13:09.908 "data_size": 63488 00:13:09.908 }, 00:13:09.908 { 00:13:09.908 "name": "BaseBdev3", 00:13:09.908 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:09.908 "is_configured": true, 00:13:09.908 "data_offset": 2048, 00:13:09.908 "data_size": 63488 00:13:09.908 }, 00:13:09.908 { 00:13:09.908 "name": "BaseBdev4", 00:13:09.908 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:09.908 "is_configured": true, 00:13:09.908 "data_offset": 2048, 00:13:09.908 "data_size": 63488 00:13:09.908 } 00:13:09.908 ] 00:13:09.908 }' 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.908 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.168 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.168 "name": "raid_bdev1", 00:13:10.168 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:10.168 "strip_size_kb": 0, 00:13:10.168 "state": "online", 00:13:10.168 "raid_level": "raid1", 00:13:10.168 "superblock": true, 00:13:10.168 "num_base_bdevs": 4, 00:13:10.169 "num_base_bdevs_discovered": 3, 00:13:10.169 "num_base_bdevs_operational": 3, 00:13:10.169 "process": { 00:13:10.169 "type": "rebuild", 00:13:10.169 "target": "spare", 00:13:10.169 "progress": { 00:13:10.169 "blocks": 26624, 00:13:10.169 "percent": 41 00:13:10.169 } 00:13:10.169 }, 00:13:10.169 "base_bdevs_list": [ 00:13:10.169 { 00:13:10.169 "name": "spare", 00:13:10.169 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:10.169 "is_configured": 
true, 00:13:10.169 "data_offset": 2048, 00:13:10.169 "data_size": 63488 00:13:10.169 }, 00:13:10.169 { 00:13:10.169 "name": null, 00:13:10.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.169 "is_configured": false, 00:13:10.169 "data_offset": 0, 00:13:10.169 "data_size": 63488 00:13:10.169 }, 00:13:10.169 { 00:13:10.169 "name": "BaseBdev3", 00:13:10.169 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:10.169 "is_configured": true, 00:13:10.169 "data_offset": 2048, 00:13:10.169 "data_size": 63488 00:13:10.169 }, 00:13:10.169 { 00:13:10.169 "name": "BaseBdev4", 00:13:10.169 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:10.169 "is_configured": true, 00:13:10.169 "data_offset": 2048, 00:13:10.169 "data_size": 63488 00:13:10.169 } 00:13:10.169 ] 00:13:10.169 }' 00:13:10.169 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.169 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.169 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.169 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.169 02:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:11.107 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:11.107 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.107 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.107 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.107 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.107 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:11.107 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.107 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.107 02:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.107 02:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.366 02:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.366 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.366 "name": "raid_bdev1", 00:13:11.366 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:11.366 "strip_size_kb": 0, 00:13:11.366 "state": "online", 00:13:11.366 "raid_level": "raid1", 00:13:11.366 "superblock": true, 00:13:11.366 "num_base_bdevs": 4, 00:13:11.366 "num_base_bdevs_discovered": 3, 00:13:11.366 "num_base_bdevs_operational": 3, 00:13:11.366 "process": { 00:13:11.366 "type": "rebuild", 00:13:11.366 "target": "spare", 00:13:11.366 "progress": { 00:13:11.366 "blocks": 51200, 00:13:11.366 "percent": 80 00:13:11.366 } 00:13:11.366 }, 00:13:11.366 "base_bdevs_list": [ 00:13:11.366 { 00:13:11.366 "name": "spare", 00:13:11.366 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:11.366 "is_configured": true, 00:13:11.366 "data_offset": 2048, 00:13:11.366 "data_size": 63488 00:13:11.366 }, 00:13:11.366 { 00:13:11.366 "name": null, 00:13:11.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.366 "is_configured": false, 00:13:11.366 "data_offset": 0, 00:13:11.366 "data_size": 63488 00:13:11.366 }, 00:13:11.366 { 00:13:11.366 "name": "BaseBdev3", 00:13:11.366 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:11.367 "is_configured": true, 00:13:11.367 "data_offset": 2048, 00:13:11.367 "data_size": 63488 00:13:11.367 }, 00:13:11.367 { 00:13:11.367 "name": "BaseBdev4", 00:13:11.367 "uuid": 
"4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:11.367 "is_configured": true, 00:13:11.367 "data_offset": 2048, 00:13:11.367 "data_size": 63488 00:13:11.367 } 00:13:11.367 ] 00:13:11.367 }' 00:13:11.367 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.367 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.367 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.367 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.367 02:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:11.946 [2024-11-28 02:28:45.372262] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:11.946 [2024-11-28 02:28:45.372484] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:11.946 [2024-11-28 02:28:45.372693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.514 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.514 "name": "raid_bdev1", 00:13:12.514 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:12.514 "strip_size_kb": 0, 00:13:12.514 "state": "online", 00:13:12.514 "raid_level": "raid1", 00:13:12.514 "superblock": true, 00:13:12.514 "num_base_bdevs": 4, 00:13:12.514 "num_base_bdevs_discovered": 3, 00:13:12.514 "num_base_bdevs_operational": 3, 00:13:12.514 "base_bdevs_list": [ 00:13:12.514 { 00:13:12.514 "name": "spare", 00:13:12.514 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:12.514 "is_configured": true, 00:13:12.514 "data_offset": 2048, 00:13:12.514 "data_size": 63488 00:13:12.514 }, 00:13:12.514 { 00:13:12.514 "name": null, 00:13:12.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.514 "is_configured": false, 00:13:12.514 "data_offset": 0, 00:13:12.514 "data_size": 63488 00:13:12.515 }, 00:13:12.515 { 00:13:12.515 "name": "BaseBdev3", 00:13:12.515 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:12.515 "is_configured": true, 00:13:12.515 "data_offset": 2048, 00:13:12.515 "data_size": 63488 00:13:12.515 }, 00:13:12.515 { 00:13:12.515 "name": "BaseBdev4", 00:13:12.515 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:12.515 "is_configured": true, 00:13:12.515 "data_offset": 2048, 00:13:12.515 "data_size": 63488 00:13:12.515 } 00:13:12.515 ] 00:13:12.515 }' 00:13:12.515 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.515 02:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:12.515 
02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.515 "name": "raid_bdev1", 00:13:12.515 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:12.515 "strip_size_kb": 0, 00:13:12.515 "state": "online", 00:13:12.515 "raid_level": "raid1", 00:13:12.515 "superblock": true, 00:13:12.515 "num_base_bdevs": 4, 00:13:12.515 "num_base_bdevs_discovered": 3, 00:13:12.515 "num_base_bdevs_operational": 3, 00:13:12.515 "base_bdevs_list": [ 00:13:12.515 { 00:13:12.515 "name": "spare", 00:13:12.515 "uuid": 
"f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:12.515 "is_configured": true, 00:13:12.515 "data_offset": 2048, 00:13:12.515 "data_size": 63488 00:13:12.515 }, 00:13:12.515 { 00:13:12.515 "name": null, 00:13:12.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.515 "is_configured": false, 00:13:12.515 "data_offset": 0, 00:13:12.515 "data_size": 63488 00:13:12.515 }, 00:13:12.515 { 00:13:12.515 "name": "BaseBdev3", 00:13:12.515 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:12.515 "is_configured": true, 00:13:12.515 "data_offset": 2048, 00:13:12.515 "data_size": 63488 00:13:12.515 }, 00:13:12.515 { 00:13:12.515 "name": "BaseBdev4", 00:13:12.515 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:12.515 "is_configured": true, 00:13:12.515 "data_offset": 2048, 00:13:12.515 "data_size": 63488 00:13:12.515 } 00:13:12.515 ] 00:13:12.515 }' 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.515 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.774 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.774 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.774 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.774 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.774 "name": "raid_bdev1", 00:13:12.775 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:12.775 "strip_size_kb": 0, 00:13:12.775 "state": "online", 00:13:12.775 "raid_level": "raid1", 00:13:12.775 "superblock": true, 00:13:12.775 "num_base_bdevs": 4, 00:13:12.775 "num_base_bdevs_discovered": 3, 00:13:12.775 "num_base_bdevs_operational": 3, 00:13:12.775 "base_bdevs_list": [ 00:13:12.775 { 00:13:12.775 "name": "spare", 00:13:12.775 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:12.775 "is_configured": true, 00:13:12.775 "data_offset": 2048, 00:13:12.775 "data_size": 63488 00:13:12.775 }, 00:13:12.775 { 00:13:12.775 "name": null, 00:13:12.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.775 "is_configured": false, 00:13:12.775 "data_offset": 0, 00:13:12.775 "data_size": 63488 00:13:12.775 }, 00:13:12.775 { 00:13:12.775 "name": "BaseBdev3", 00:13:12.775 "uuid": 
"2502c60d-334e-5946-92d6-93f624611703", 00:13:12.775 "is_configured": true, 00:13:12.775 "data_offset": 2048, 00:13:12.775 "data_size": 63488 00:13:12.775 }, 00:13:12.775 { 00:13:12.775 "name": "BaseBdev4", 00:13:12.775 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:12.775 "is_configured": true, 00:13:12.775 "data_offset": 2048, 00:13:12.775 "data_size": 63488 00:13:12.775 } 00:13:12.775 ] 00:13:12.775 }' 00:13:12.775 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.775 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.034 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.034 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.035 [2024-11-28 02:28:46.603373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.035 [2024-11-28 02:28:46.603502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.035 [2024-11-28 02:28:46.603663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.035 [2024-11-28 02:28:46.603799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.035 [2024-11-28 02:28:46.603850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.035 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:13.294 /dev/nbd0 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:13.294 02:28:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.294 1+0 records in 00:13:13.294 1+0 records out 00:13:13.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400935 s, 10.2 MB/s 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.294 02:28:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:13.553 /dev/nbd1 00:13:13.553 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:13.553 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:13.553 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:13.553 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:13.553 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.553 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.554 1+0 records in 00:13:13.554 1+0 records out 00:13:13.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449215 s, 9.1 MB/s 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.554 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:13.813 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:13.813 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.813 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:13.813 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:13.813 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:13.813 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.813 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:14.072 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:14.072 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:14.072 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:14.072 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.072 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.072 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:14.072 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:14.072 
02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.072 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.072 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.331 [2024-11-28 02:28:47.793415] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:14.331 [2024-11-28 02:28:47.793555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.331 [2024-11-28 02:28:47.793599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:14.331 [2024-11-28 02:28:47.793627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.331 [2024-11-28 02:28:47.796270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.331 [2024-11-28 02:28:47.796348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:14.331 [2024-11-28 02:28:47.796479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:14.331 [2024-11-28 02:28:47.796562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:14.331 [2024-11-28 02:28:47.796797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.331 [2024-11-28 02:28:47.796948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:14.331 spare 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.331 [2024-11-28 02:28:47.896910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:14.331 [2024-11-28 02:28:47.897059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:14.331 [2024-11-28 
02:28:47.897516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:14.331 [2024-11-28 02:28:47.897821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:14.331 [2024-11-28 02:28:47.897869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:14.331 [2024-11-28 02:28:47.898161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.331 02:28:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.331 "name": "raid_bdev1", 00:13:14.331 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:14.331 "strip_size_kb": 0, 00:13:14.331 "state": "online", 00:13:14.331 "raid_level": "raid1", 00:13:14.331 "superblock": true, 00:13:14.331 "num_base_bdevs": 4, 00:13:14.331 "num_base_bdevs_discovered": 3, 00:13:14.331 "num_base_bdevs_operational": 3, 00:13:14.331 "base_bdevs_list": [ 00:13:14.331 { 00:13:14.331 "name": "spare", 00:13:14.331 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:14.331 "is_configured": true, 00:13:14.331 "data_offset": 2048, 00:13:14.331 "data_size": 63488 00:13:14.331 }, 00:13:14.331 { 00:13:14.331 "name": null, 00:13:14.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.331 "is_configured": false, 00:13:14.331 "data_offset": 2048, 00:13:14.331 "data_size": 63488 00:13:14.331 }, 00:13:14.331 { 00:13:14.331 "name": "BaseBdev3", 00:13:14.331 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:14.331 "is_configured": true, 00:13:14.331 "data_offset": 2048, 00:13:14.331 "data_size": 63488 00:13:14.331 }, 00:13:14.331 { 00:13:14.331 "name": "BaseBdev4", 00:13:14.331 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:14.331 "is_configured": true, 00:13:14.331 "data_offset": 2048, 00:13:14.331 "data_size": 63488 00:13:14.331 } 00:13:14.331 ] 00:13:14.331 }' 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.331 02:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.900 "name": "raid_bdev1", 00:13:14.900 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:14.900 "strip_size_kb": 0, 00:13:14.900 "state": "online", 00:13:14.900 "raid_level": "raid1", 00:13:14.900 "superblock": true, 00:13:14.900 "num_base_bdevs": 4, 00:13:14.900 "num_base_bdevs_discovered": 3, 00:13:14.900 "num_base_bdevs_operational": 3, 00:13:14.900 "base_bdevs_list": [ 00:13:14.900 { 00:13:14.900 "name": "spare", 00:13:14.900 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:14.900 "is_configured": true, 00:13:14.900 "data_offset": 2048, 00:13:14.900 "data_size": 63488 00:13:14.900 }, 00:13:14.900 { 00:13:14.900 "name": null, 00:13:14.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.900 "is_configured": false, 00:13:14.900 "data_offset": 2048, 00:13:14.900 "data_size": 63488 00:13:14.900 }, 00:13:14.900 { 00:13:14.900 "name": 
"BaseBdev3", 00:13:14.900 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:14.900 "is_configured": true, 00:13:14.900 "data_offset": 2048, 00:13:14.900 "data_size": 63488 00:13:14.900 }, 00:13:14.900 { 00:13:14.900 "name": "BaseBdev4", 00:13:14.900 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:14.900 "is_configured": true, 00:13:14.900 "data_offset": 2048, 00:13:14.900 "data_size": 63488 00:13:14.900 } 00:13:14.900 ] 00:13:14.900 }' 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.900 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.901 [2024-11-28 02:28:48.525118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.901 02:28:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.901 "name": "raid_bdev1", 00:13:14.901 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:14.901 "strip_size_kb": 0, 00:13:14.901 "state": "online", 
00:13:14.901 "raid_level": "raid1", 00:13:14.901 "superblock": true, 00:13:14.901 "num_base_bdevs": 4, 00:13:14.901 "num_base_bdevs_discovered": 2, 00:13:14.901 "num_base_bdevs_operational": 2, 00:13:14.901 "base_bdevs_list": [ 00:13:14.901 { 00:13:14.901 "name": null, 00:13:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.901 "is_configured": false, 00:13:14.901 "data_offset": 0, 00:13:14.901 "data_size": 63488 00:13:14.901 }, 00:13:14.901 { 00:13:14.901 "name": null, 00:13:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.901 "is_configured": false, 00:13:14.901 "data_offset": 2048, 00:13:14.901 "data_size": 63488 00:13:14.901 }, 00:13:14.901 { 00:13:14.901 "name": "BaseBdev3", 00:13:14.901 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:14.901 "is_configured": true, 00:13:14.901 "data_offset": 2048, 00:13:14.901 "data_size": 63488 00:13:14.901 }, 00:13:14.901 { 00:13:14.901 "name": "BaseBdev4", 00:13:14.901 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:14.901 "is_configured": true, 00:13:14.901 "data_offset": 2048, 00:13:14.901 "data_size": 63488 00:13:14.901 } 00:13:14.901 ] 00:13:14.901 }' 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.901 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.469 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:15.469 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.469 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.469 [2024-11-28 02:28:48.940491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:15.469 [2024-11-28 02:28:48.940779] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:13:15.469 [2024-11-28 02:28:48.940797] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:15.469 [2024-11-28 02:28:48.940847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:15.469 [2024-11-28 02:28:48.955275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:15.469 02:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.469 02:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:15.469 [2024-11-28 02:28:48.957540] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:16.407 02:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.407 02:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.407 02:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.407 02:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.407 02:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.407 02:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.407 02:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.408 02:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.408 02:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.408 02:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.408 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.408 "name": "raid_bdev1", 00:13:16.408 "uuid": 
"251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:16.408 "strip_size_kb": 0, 00:13:16.408 "state": "online", 00:13:16.408 "raid_level": "raid1", 00:13:16.408 "superblock": true, 00:13:16.408 "num_base_bdevs": 4, 00:13:16.408 "num_base_bdevs_discovered": 3, 00:13:16.408 "num_base_bdevs_operational": 3, 00:13:16.408 "process": { 00:13:16.408 "type": "rebuild", 00:13:16.408 "target": "spare", 00:13:16.408 "progress": { 00:13:16.408 "blocks": 20480, 00:13:16.408 "percent": 32 00:13:16.408 } 00:13:16.408 }, 00:13:16.408 "base_bdevs_list": [ 00:13:16.408 { 00:13:16.408 "name": "spare", 00:13:16.408 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:16.408 "is_configured": true, 00:13:16.408 "data_offset": 2048, 00:13:16.408 "data_size": 63488 00:13:16.408 }, 00:13:16.408 { 00:13:16.408 "name": null, 00:13:16.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.408 "is_configured": false, 00:13:16.408 "data_offset": 2048, 00:13:16.408 "data_size": 63488 00:13:16.408 }, 00:13:16.408 { 00:13:16.408 "name": "BaseBdev3", 00:13:16.408 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:16.408 "is_configured": true, 00:13:16.408 "data_offset": 2048, 00:13:16.408 "data_size": 63488 00:13:16.408 }, 00:13:16.408 { 00:13:16.408 "name": "BaseBdev4", 00:13:16.408 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:16.408 "is_configured": true, 00:13:16.408 "data_offset": 2048, 00:13:16.408 "data_size": 63488 00:13:16.408 } 00:13:16.408 ] 00:13:16.408 }' 00:13:16.408 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.408 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.408 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.666 [2024-11-28 02:28:50.109546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.666 [2024-11-28 02:28:50.167375] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:16.666 [2024-11-28 02:28:50.167540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.666 [2024-11-28 02:28:50.167585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.666 [2024-11-28 02:28:50.167608] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.666 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.667 "name": "raid_bdev1", 00:13:16.667 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:16.667 "strip_size_kb": 0, 00:13:16.667 "state": "online", 00:13:16.667 "raid_level": "raid1", 00:13:16.667 "superblock": true, 00:13:16.667 "num_base_bdevs": 4, 00:13:16.667 "num_base_bdevs_discovered": 2, 00:13:16.667 "num_base_bdevs_operational": 2, 00:13:16.667 "base_bdevs_list": [ 00:13:16.667 { 00:13:16.667 "name": null, 00:13:16.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.667 "is_configured": false, 00:13:16.667 "data_offset": 0, 00:13:16.667 "data_size": 63488 00:13:16.667 }, 00:13:16.667 { 00:13:16.667 "name": null, 00:13:16.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.667 "is_configured": false, 00:13:16.667 "data_offset": 2048, 00:13:16.667 "data_size": 63488 00:13:16.667 }, 00:13:16.667 { 00:13:16.667 "name": "BaseBdev3", 00:13:16.667 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:16.667 "is_configured": true, 00:13:16.667 "data_offset": 2048, 00:13:16.667 "data_size": 63488 00:13:16.667 }, 00:13:16.667 { 00:13:16.667 "name": "BaseBdev4", 00:13:16.667 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:16.667 "is_configured": true, 00:13:16.667 
"data_offset": 2048, 00:13:16.667 "data_size": 63488 00:13:16.667 } 00:13:16.667 ] 00:13:16.667 }' 00:13:16.667 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.667 02:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.925 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:16.925 02:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.925 02:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.925 [2024-11-28 02:28:50.595139] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:16.925 [2024-11-28 02:28:50.595329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.925 [2024-11-28 02:28:50.595391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:16.925 [2024-11-28 02:28:50.595427] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.925 [2024-11-28 02:28:50.596104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.925 [2024-11-28 02:28:50.596172] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:16.925 [2024-11-28 02:28:50.596311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:16.925 [2024-11-28 02:28:50.596327] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:16.925 [2024-11-28 02:28:50.596343] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:16.925 [2024-11-28 02:28:50.596371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:17.182 spare 00:13:17.182 [2024-11-28 02:28:50.611858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:17.182 02:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.182 02:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:17.182 [2024-11-28 02:28:50.614235] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.170 "name": "raid_bdev1", 00:13:18.170 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:18.170 "strip_size_kb": 0, 00:13:18.170 "state": "online", 00:13:18.170 
"raid_level": "raid1", 00:13:18.170 "superblock": true, 00:13:18.170 "num_base_bdevs": 4, 00:13:18.170 "num_base_bdevs_discovered": 3, 00:13:18.170 "num_base_bdevs_operational": 3, 00:13:18.170 "process": { 00:13:18.170 "type": "rebuild", 00:13:18.170 "target": "spare", 00:13:18.170 "progress": { 00:13:18.170 "blocks": 20480, 00:13:18.170 "percent": 32 00:13:18.170 } 00:13:18.170 }, 00:13:18.170 "base_bdevs_list": [ 00:13:18.170 { 00:13:18.170 "name": "spare", 00:13:18.170 "uuid": "f5b79541-1e19-5d7c-8789-b4f9d09638c6", 00:13:18.170 "is_configured": true, 00:13:18.170 "data_offset": 2048, 00:13:18.170 "data_size": 63488 00:13:18.170 }, 00:13:18.170 { 00:13:18.170 "name": null, 00:13:18.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.170 "is_configured": false, 00:13:18.170 "data_offset": 2048, 00:13:18.170 "data_size": 63488 00:13:18.170 }, 00:13:18.170 { 00:13:18.170 "name": "BaseBdev3", 00:13:18.170 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:18.170 "is_configured": true, 00:13:18.170 "data_offset": 2048, 00:13:18.170 "data_size": 63488 00:13:18.170 }, 00:13:18.170 { 00:13:18.170 "name": "BaseBdev4", 00:13:18.170 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:18.170 "is_configured": true, 00:13:18.170 "data_offset": 2048, 00:13:18.170 "data_size": 63488 00:13:18.170 } 00:13:18.170 ] 00:13:18.170 }' 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.170 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.171 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.171 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.171 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:18.171 02:28:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.171 02:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.171 [2024-11-28 02:28:51.753885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.171 [2024-11-28 02:28:51.824533] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:18.171 [2024-11-28 02:28:51.824749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.171 [2024-11-28 02:28:51.824796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.171 [2024-11-28 02:28:51.824827] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.431 
02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.431 "name": "raid_bdev1", 00:13:18.431 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:18.431 "strip_size_kb": 0, 00:13:18.431 "state": "online", 00:13:18.431 "raid_level": "raid1", 00:13:18.431 "superblock": true, 00:13:18.431 "num_base_bdevs": 4, 00:13:18.431 "num_base_bdevs_discovered": 2, 00:13:18.431 "num_base_bdevs_operational": 2, 00:13:18.431 "base_bdevs_list": [ 00:13:18.431 { 00:13:18.431 "name": null, 00:13:18.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.431 "is_configured": false, 00:13:18.431 "data_offset": 0, 00:13:18.431 "data_size": 63488 00:13:18.431 }, 00:13:18.431 { 00:13:18.431 "name": null, 00:13:18.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.431 "is_configured": false, 00:13:18.431 "data_offset": 2048, 00:13:18.431 "data_size": 63488 00:13:18.431 }, 00:13:18.431 { 00:13:18.431 "name": "BaseBdev3", 00:13:18.431 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:18.431 "is_configured": true, 00:13:18.431 "data_offset": 2048, 00:13:18.431 "data_size": 63488 00:13:18.431 }, 00:13:18.431 { 00:13:18.431 "name": "BaseBdev4", 00:13:18.431 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:18.431 "is_configured": true, 00:13:18.431 "data_offset": 2048, 00:13:18.431 "data_size": 63488 00:13:18.431 } 00:13:18.431 ] 00:13:18.431 }' 00:13:18.431 02:28:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.431 02:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.690 "name": "raid_bdev1", 00:13:18.690 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:18.690 "strip_size_kb": 0, 00:13:18.690 "state": "online", 00:13:18.690 "raid_level": "raid1", 00:13:18.690 "superblock": true, 00:13:18.690 "num_base_bdevs": 4, 00:13:18.690 "num_base_bdevs_discovered": 2, 00:13:18.690 "num_base_bdevs_operational": 2, 00:13:18.690 "base_bdevs_list": [ 00:13:18.690 { 00:13:18.690 "name": null, 00:13:18.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.690 "is_configured": false, 00:13:18.690 "data_offset": 0, 00:13:18.690 "data_size": 63488 00:13:18.690 }, 00:13:18.690 
{ 00:13:18.690 "name": null, 00:13:18.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.690 "is_configured": false, 00:13:18.690 "data_offset": 2048, 00:13:18.690 "data_size": 63488 00:13:18.690 }, 00:13:18.690 { 00:13:18.690 "name": "BaseBdev3", 00:13:18.690 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:18.690 "is_configured": true, 00:13:18.690 "data_offset": 2048, 00:13:18.690 "data_size": 63488 00:13:18.690 }, 00:13:18.690 { 00:13:18.690 "name": "BaseBdev4", 00:13:18.690 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:18.690 "is_configured": true, 00:13:18.690 "data_offset": 2048, 00:13:18.690 "data_size": 63488 00:13:18.690 } 00:13:18.690 ] 00:13:18.690 }' 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.690 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.949 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.949 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:18.949 02:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.949 02:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.949 02:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.949 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:18.949 02:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.949 02:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.949 [2024-11-28 02:28:52.413836] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:18.949 [2024-11-28 02:28:52.413939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.949 [2024-11-28 02:28:52.413964] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:18.949 [2024-11-28 02:28:52.413978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.949 [2024-11-28 02:28:52.414515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.949 [2024-11-28 02:28:52.414537] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:18.949 [2024-11-28 02:28:52.414627] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:18.949 [2024-11-28 02:28:52.414644] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:18.949 [2024-11-28 02:28:52.414654] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:18.949 [2024-11-28 02:28:52.414683] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:18.950 BaseBdev1 00:13:18.950 02:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.950 02:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.887 02:28:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.887 "name": "raid_bdev1", 00:13:19.887 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:19.887 "strip_size_kb": 0, 00:13:19.887 "state": "online", 00:13:19.887 "raid_level": "raid1", 00:13:19.887 "superblock": true, 00:13:19.887 "num_base_bdevs": 4, 00:13:19.887 "num_base_bdevs_discovered": 2, 00:13:19.887 "num_base_bdevs_operational": 2, 00:13:19.887 "base_bdevs_list": [ 00:13:19.887 { 00:13:19.887 "name": null, 00:13:19.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.887 "is_configured": false, 00:13:19.887 "data_offset": 0, 00:13:19.887 "data_size": 63488 00:13:19.887 }, 00:13:19.887 { 00:13:19.887 "name": null, 00:13:19.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.887 
"is_configured": false, 00:13:19.887 "data_offset": 2048, 00:13:19.887 "data_size": 63488 00:13:19.887 }, 00:13:19.887 { 00:13:19.887 "name": "BaseBdev3", 00:13:19.887 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:19.887 "is_configured": true, 00:13:19.887 "data_offset": 2048, 00:13:19.887 "data_size": 63488 00:13:19.887 }, 00:13:19.887 { 00:13:19.887 "name": "BaseBdev4", 00:13:19.887 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:19.887 "is_configured": true, 00:13:19.887 "data_offset": 2048, 00:13:19.887 "data_size": 63488 00:13:19.887 } 00:13:19.887 ] 00:13:19.887 }' 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.887 02:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.455 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:20.455 "name": "raid_bdev1", 00:13:20.455 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:20.455 "strip_size_kb": 0, 00:13:20.455 "state": "online", 00:13:20.455 "raid_level": "raid1", 00:13:20.455 "superblock": true, 00:13:20.455 "num_base_bdevs": 4, 00:13:20.455 "num_base_bdevs_discovered": 2, 00:13:20.455 "num_base_bdevs_operational": 2, 00:13:20.455 "base_bdevs_list": [ 00:13:20.455 { 00:13:20.455 "name": null, 00:13:20.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.455 "is_configured": false, 00:13:20.455 "data_offset": 0, 00:13:20.455 "data_size": 63488 00:13:20.455 }, 00:13:20.455 { 00:13:20.455 "name": null, 00:13:20.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.455 "is_configured": false, 00:13:20.455 "data_offset": 2048, 00:13:20.456 "data_size": 63488 00:13:20.456 }, 00:13:20.456 { 00:13:20.456 "name": "BaseBdev3", 00:13:20.456 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:20.456 "is_configured": true, 00:13:20.456 "data_offset": 2048, 00:13:20.456 "data_size": 63488 00:13:20.456 }, 00:13:20.456 { 00:13:20.456 "name": "BaseBdev4", 00:13:20.456 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:20.456 "is_configured": true, 00:13:20.456 "data_offset": 2048, 00:13:20.456 "data_size": 63488 00:13:20.456 } 00:13:20.456 ] 00:13:20.456 }' 00:13:20.456 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.456 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.456 02:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.456 [2024-11-28 02:28:54.011356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.456 [2024-11-28 02:28:54.011724] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:20.456 [2024-11-28 02:28:54.011796] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:20.456 request: 00:13:20.456 { 00:13:20.456 "base_bdev": "BaseBdev1", 00:13:20.456 "raid_bdev": "raid_bdev1", 00:13:20.456 "method": "bdev_raid_add_base_bdev", 00:13:20.456 "req_id": 1 00:13:20.456 } 00:13:20.456 Got JSON-RPC error response 00:13:20.456 response: 00:13:20.456 { 00:13:20.456 "code": -22, 00:13:20.456 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:20.456 } 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.456 02:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:21.393 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.651 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.651 "name": "raid_bdev1", 00:13:21.651 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:21.651 "strip_size_kb": 0, 00:13:21.651 "state": "online", 00:13:21.651 "raid_level": "raid1", 00:13:21.651 "superblock": true, 00:13:21.651 "num_base_bdevs": 4, 00:13:21.651 "num_base_bdevs_discovered": 2, 00:13:21.651 "num_base_bdevs_operational": 2, 00:13:21.651 "base_bdevs_list": [ 00:13:21.651 { 00:13:21.651 "name": null, 00:13:21.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.651 "is_configured": false, 00:13:21.651 "data_offset": 0, 00:13:21.651 "data_size": 63488 00:13:21.651 }, 00:13:21.651 { 00:13:21.651 "name": null, 00:13:21.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.651 "is_configured": false, 00:13:21.651 "data_offset": 2048, 00:13:21.651 "data_size": 63488 00:13:21.651 }, 00:13:21.651 { 00:13:21.651 "name": "BaseBdev3", 00:13:21.651 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:21.651 "is_configured": true, 00:13:21.651 "data_offset": 2048, 00:13:21.651 "data_size": 63488 00:13:21.651 }, 00:13:21.651 { 00:13:21.651 "name": "BaseBdev4", 00:13:21.651 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:21.651 "is_configured": true, 00:13:21.651 "data_offset": 2048, 00:13:21.651 "data_size": 63488 00:13:21.651 } 00:13:21.651 ] 00:13:21.651 }' 00:13:21.651 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.651 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.910 02:28:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.910 "name": "raid_bdev1", 00:13:21.910 "uuid": "251c2d04-5e3f-4c3f-8f6b-bda1791a68c6", 00:13:21.910 "strip_size_kb": 0, 00:13:21.910 "state": "online", 00:13:21.910 "raid_level": "raid1", 00:13:21.910 "superblock": true, 00:13:21.910 "num_base_bdevs": 4, 00:13:21.910 "num_base_bdevs_discovered": 2, 00:13:21.910 "num_base_bdevs_operational": 2, 00:13:21.910 "base_bdevs_list": [ 00:13:21.910 { 00:13:21.910 "name": null, 00:13:21.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.910 "is_configured": false, 00:13:21.910 "data_offset": 0, 00:13:21.910 "data_size": 63488 00:13:21.910 }, 00:13:21.910 { 00:13:21.910 "name": null, 00:13:21.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.910 "is_configured": false, 00:13:21.910 "data_offset": 2048, 00:13:21.910 "data_size": 63488 00:13:21.910 }, 00:13:21.910 { 00:13:21.910 "name": "BaseBdev3", 00:13:21.910 "uuid": "2502c60d-334e-5946-92d6-93f624611703", 00:13:21.910 "is_configured": true, 00:13:21.910 "data_offset": 2048, 00:13:21.910 "data_size": 63488 00:13:21.910 }, 
00:13:21.910 { 00:13:21.910 "name": "BaseBdev4", 00:13:21.910 "uuid": "4f791b57-0607-5415-b3b8-5204f98364ca", 00:13:21.910 "is_configured": true, 00:13:21.910 "data_offset": 2048, 00:13:21.910 "data_size": 63488 00:13:21.910 } 00:13:21.910 ] 00:13:21.910 }' 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77744 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77744 ']' 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77744 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.910 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77744 00:13:22.169 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.169 killing process with pid 77744 00:13:22.169 Received shutdown signal, test time was about 60.000000 seconds 00:13:22.169 00:13:22.169 Latency(us) 00:13:22.169 [2024-11-28T02:28:55.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.169 [2024-11-28T02:28:55.848Z] =================================================================================================================== 00:13:22.169 [2024-11-28T02:28:55.848Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:13:22.169 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.169 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77744' 00:13:22.169 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77744 00:13:22.169 [2024-11-28 02:28:55.601152] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:22.169 02:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77744 00:13:22.169 [2024-11-28 02:28:55.601326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.169 [2024-11-28 02:28:55.601419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.169 [2024-11-28 02:28:55.601454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:22.738 [2024-11-28 02:28:56.238264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.117 02:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:24.117 00:13:24.117 real 0m25.636s 00:13:24.117 user 0m30.579s 00:13:24.117 sys 0m4.028s 00:13:24.117 02:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.117 02:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.117 ************************************ 00:13:24.117 END TEST raid_rebuild_test_sb 00:13:24.117 ************************************ 00:13:24.117 02:28:57 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:24.117 02:28:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:24.118 02:28:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.118 02:28:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:24.118 ************************************ 00:13:24.118 START TEST raid_rebuild_test_io 00:13:24.118 ************************************ 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78504 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78504 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78504 ']' 00:13:24.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.118 02:28:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.377 [2024-11-28 02:28:57.888116] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:24.377 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:24.377 Zero copy mechanism will not be used. 00:13:24.377 [2024-11-28 02:28:57.888366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78504 ] 00:13:24.637 [2024-11-28 02:28:58.069643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.637 [2024-11-28 02:28:58.228918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.896 [2024-11-28 02:28:58.511473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.896 [2024-11-28 02:28:58.511528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.155 BaseBdev1_malloc 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.155 [2024-11-28 02:28:58.764772] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:25.155 [2024-11-28 02:28:58.764858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.155 [2024-11-28 02:28:58.764907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:25.155 [2024-11-28 02:28:58.764943] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.155 [2024-11-28 02:28:58.767574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.155 [2024-11-28 02:28:58.767617] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:25.155 BaseBdev1 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.155 BaseBdev2_malloc 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.155 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 [2024-11-28 02:28:58.835713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:25.415 [2024-11-28 02:28:58.835810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.415 [2024-11-28 02:28:58.835841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:25.415 [2024-11-28 02:28:58.835857] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.415 [2024-11-28 02:28:58.838648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.415 [2024-11-28 02:28:58.838691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:25.415 BaseBdev2 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 BaseBdev3_malloc 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 [2024-11-28 02:28:58.920455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:25.415 [2024-11-28 02:28:58.920531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.415 [2024-11-28 02:28:58.920559] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:25.415 [2024-11-28 02:28:58.920574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.415 [2024-11-28 02:28:58.923437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.415 [2024-11-28 02:28:58.923485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:25.415 BaseBdev3 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 BaseBdev4_malloc 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 
00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 [2024-11-28 02:28:58.989409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:25.415 [2024-11-28 02:28:58.989481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.415 [2024-11-28 02:28:58.989506] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:25.415 [2024-11-28 02:28:58.989518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.415 [2024-11-28 02:28:58.992227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.415 [2024-11-28 02:28:58.992275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:25.415 BaseBdev4 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.415 02:28:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 spare_malloc 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 spare_delay 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 [2024-11-28 02:28:59.070557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:25.415 [2024-11-28 02:28:59.070618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.415 [2024-11-28 02:28:59.070637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:25.415 [2024-11-28 02:28:59.070650] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.415 [2024-11-28 02:28:59.073494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.415 [2024-11-28 02:28:59.073541] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:25.415 spare 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 [2024-11-28 02:28:59.082577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.415 [2024-11-28 02:28:59.085103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.415 [2024-11-28 02:28:59.085197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:13:25.415 [2024-11-28 02:28:59.085260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:25.415 [2024-11-28 02:28:59.085363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:25.415 [2024-11-28 02:28:59.085379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:25.415 [2024-11-28 02:28:59.085721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:25.415 [2024-11-28 02:28:59.085951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:25.415 [2024-11-28 02:28:59.085967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:25.415 [2024-11-28 02:28:59.086180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.415 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.416 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.416 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.416 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.416 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:25.416 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.674 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.674 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.674 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.674 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.674 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.674 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.674 "name": "raid_bdev1", 00:13:25.674 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:25.674 "strip_size_kb": 0, 00:13:25.674 "state": "online", 00:13:25.674 "raid_level": "raid1", 00:13:25.674 "superblock": false, 00:13:25.674 "num_base_bdevs": 4, 00:13:25.674 "num_base_bdevs_discovered": 4, 00:13:25.674 "num_base_bdevs_operational": 4, 00:13:25.674 "base_bdevs_list": [ 00:13:25.674 { 00:13:25.674 "name": "BaseBdev1", 00:13:25.674 "uuid": "b85ce4e6-fb84-55ae-a72b-7c26e3cd88f4", 00:13:25.674 "is_configured": true, 00:13:25.674 "data_offset": 0, 00:13:25.674 "data_size": 65536 00:13:25.674 }, 00:13:25.674 { 00:13:25.674 "name": "BaseBdev2", 00:13:25.674 "uuid": "2557c52f-8fd4-533e-b8e7-cc4f76f02c35", 00:13:25.674 "is_configured": true, 00:13:25.674 "data_offset": 0, 00:13:25.674 "data_size": 65536 00:13:25.674 }, 00:13:25.674 { 00:13:25.674 "name": "BaseBdev3", 00:13:25.674 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:25.674 "is_configured": true, 00:13:25.674 "data_offset": 0, 00:13:25.674 "data_size": 65536 00:13:25.674 }, 00:13:25.674 { 00:13:25.674 "name": "BaseBdev4", 00:13:25.674 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:25.674 "is_configured": true, 00:13:25.674 
"data_offset": 0, 00:13:25.674 "data_size": 65536 00:13:25.674 } 00:13:25.674 ] 00:13:25.674 }' 00:13:25.674 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.674 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.934 [2024-11-28 02:28:59.538317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.934 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.193 [2024-11-28 02:28:59.625654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.193 02:28:59 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.193 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.193 "name": "raid_bdev1", 00:13:26.193 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:26.193 "strip_size_kb": 0, 00:13:26.193 "state": "online", 00:13:26.193 "raid_level": "raid1", 00:13:26.193 "superblock": false, 00:13:26.193 "num_base_bdevs": 4, 00:13:26.193 "num_base_bdevs_discovered": 3, 00:13:26.193 "num_base_bdevs_operational": 3, 00:13:26.193 "base_bdevs_list": [ 00:13:26.193 { 00:13:26.193 "name": null, 00:13:26.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.193 "is_configured": false, 00:13:26.193 "data_offset": 0, 00:13:26.193 "data_size": 65536 00:13:26.193 }, 00:13:26.193 { 00:13:26.193 "name": "BaseBdev2", 00:13:26.193 "uuid": "2557c52f-8fd4-533e-b8e7-cc4f76f02c35", 00:13:26.193 "is_configured": true, 00:13:26.193 "data_offset": 0, 00:13:26.193 "data_size": 65536 00:13:26.193 }, 00:13:26.193 { 00:13:26.193 "name": "BaseBdev3", 00:13:26.193 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:26.193 "is_configured": true, 00:13:26.193 "data_offset": 0, 00:13:26.193 "data_size": 65536 00:13:26.193 }, 00:13:26.193 { 00:13:26.194 "name": "BaseBdev4", 00:13:26.194 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:26.194 "is_configured": true, 00:13:26.194 "data_offset": 0, 00:13:26.194 "data_size": 65536 00:13:26.194 } 00:13:26.194 ] 00:13:26.194 }' 00:13:26.194 02:28:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.194 02:28:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.194 [2024-11-28 02:28:59.720778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:26.194 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:26.194 Zero copy mechanism will not be used. 00:13:26.194 Running I/O for 60 seconds... 00:13:26.453 02:29:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:26.453 02:29:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.453 02:29:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.453 [2024-11-28 02:29:00.082687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.453 02:29:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.453 02:29:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:26.711 [2024-11-28 02:29:00.155947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:26.711 [2024-11-28 02:29:00.158249] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.711 [2024-11-28 02:29:00.290318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:26.711 [2024-11-28 02:29:00.291286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:26.970 [2024-11-28 02:29:00.412395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:26.970 [2024-11-28 02:29:00.412891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:27.229 141.00 IOPS, 423.00 MiB/s [2024-11-28T02:29:00.908Z] [2024-11-28 02:29:00.778911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:27.490 [2024-11-28 02:29:00.935082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 
6144 offset_end: 12288 00:13:27.490 [2024-11-28 02:29:00.941789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:27.490 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.490 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.490 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.490 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.490 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.490 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.490 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.490 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.490 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.490 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.749 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.749 "name": "raid_bdev1", 00:13:27.749 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:27.749 "strip_size_kb": 0, 00:13:27.749 "state": "online", 00:13:27.749 "raid_level": "raid1", 00:13:27.749 "superblock": false, 00:13:27.749 "num_base_bdevs": 4, 00:13:27.749 "num_base_bdevs_discovered": 4, 00:13:27.749 "num_base_bdevs_operational": 4, 00:13:27.749 "process": { 00:13:27.749 "type": "rebuild", 00:13:27.749 "target": "spare", 00:13:27.749 "progress": { 00:13:27.749 "blocks": 10240, 00:13:27.749 "percent": 15 00:13:27.749 } 00:13:27.749 }, 00:13:27.749 "base_bdevs_list": [ 
00:13:27.749 { 00:13:27.749 "name": "spare", 00:13:27.749 "uuid": "16076076-3067-5273-b730-d4a7a07ebcc5", 00:13:27.749 "is_configured": true, 00:13:27.749 "data_offset": 0, 00:13:27.749 "data_size": 65536 00:13:27.749 }, 00:13:27.749 { 00:13:27.749 "name": "BaseBdev2", 00:13:27.749 "uuid": "2557c52f-8fd4-533e-b8e7-cc4f76f02c35", 00:13:27.749 "is_configured": true, 00:13:27.749 "data_offset": 0, 00:13:27.749 "data_size": 65536 00:13:27.749 }, 00:13:27.749 { 00:13:27.749 "name": "BaseBdev3", 00:13:27.749 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:27.749 "is_configured": true, 00:13:27.749 "data_offset": 0, 00:13:27.749 "data_size": 65536 00:13:27.749 }, 00:13:27.749 { 00:13:27.749 "name": "BaseBdev4", 00:13:27.749 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:27.749 "is_configured": true, 00:13:27.749 "data_offset": 0, 00:13:27.749 "data_size": 65536 00:13:27.749 } 00:13:27.749 ] 00:13:27.749 }' 00:13:27.749 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.749 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.749 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.749 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.749 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:27.749 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.749 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.749 [2024-11-28 02:29:01.269908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.009 [2024-11-28 02:29:01.429195] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:28.009 [2024-11-28 
02:29:01.435661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.009 [2024-11-28 02:29:01.435721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.009 [2024-11-28 02:29:01.435736] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:28.009 [2024-11-28 02:29:01.466775] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.009 "name": "raid_bdev1", 00:13:28.009 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:28.009 "strip_size_kb": 0, 00:13:28.009 "state": "online", 00:13:28.009 "raid_level": "raid1", 00:13:28.009 "superblock": false, 00:13:28.009 "num_base_bdevs": 4, 00:13:28.009 "num_base_bdevs_discovered": 3, 00:13:28.009 "num_base_bdevs_operational": 3, 00:13:28.009 "base_bdevs_list": [ 00:13:28.009 { 00:13:28.009 "name": null, 00:13:28.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.009 "is_configured": false, 00:13:28.009 "data_offset": 0, 00:13:28.009 "data_size": 65536 00:13:28.009 }, 00:13:28.009 { 00:13:28.009 "name": "BaseBdev2", 00:13:28.009 "uuid": "2557c52f-8fd4-533e-b8e7-cc4f76f02c35", 00:13:28.009 "is_configured": true, 00:13:28.009 "data_offset": 0, 00:13:28.009 "data_size": 65536 00:13:28.009 }, 00:13:28.009 { 00:13:28.009 "name": "BaseBdev3", 00:13:28.009 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:28.009 "is_configured": true, 00:13:28.009 "data_offset": 0, 00:13:28.009 "data_size": 65536 00:13:28.009 }, 00:13:28.009 { 00:13:28.009 "name": "BaseBdev4", 00:13:28.009 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:28.009 "is_configured": true, 00:13:28.009 "data_offset": 0, 00:13:28.009 "data_size": 65536 00:13:28.009 } 00:13:28.009 ] 00:13:28.009 }' 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.009 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.527 135.50 IOPS, 406.50 MiB/s [2024-11-28T02:29:02.206Z] 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:13:28.527 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.527 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.527 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.527 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.527 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.527 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.527 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.527 02:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.527 02:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.527 02:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.527 "name": "raid_bdev1", 00:13:28.527 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:28.527 "strip_size_kb": 0, 00:13:28.527 "state": "online", 00:13:28.527 "raid_level": "raid1", 00:13:28.527 "superblock": false, 00:13:28.527 "num_base_bdevs": 4, 00:13:28.527 "num_base_bdevs_discovered": 3, 00:13:28.527 "num_base_bdevs_operational": 3, 00:13:28.527 "base_bdevs_list": [ 00:13:28.527 { 00:13:28.527 "name": null, 00:13:28.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.527 "is_configured": false, 00:13:28.527 "data_offset": 0, 00:13:28.527 "data_size": 65536 00:13:28.527 }, 00:13:28.527 { 00:13:28.527 "name": "BaseBdev2", 00:13:28.527 "uuid": "2557c52f-8fd4-533e-b8e7-cc4f76f02c35", 00:13:28.527 "is_configured": true, 00:13:28.527 "data_offset": 0, 00:13:28.527 "data_size": 65536 00:13:28.527 }, 00:13:28.527 { 00:13:28.527 "name": 
"BaseBdev3", 00:13:28.527 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:28.527 "is_configured": true, 00:13:28.527 "data_offset": 0, 00:13:28.527 "data_size": 65536 00:13:28.527 }, 00:13:28.527 { 00:13:28.527 "name": "BaseBdev4", 00:13:28.527 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:28.527 "is_configured": true, 00:13:28.527 "data_offset": 0, 00:13:28.527 "data_size": 65536 00:13:28.527 } 00:13:28.527 ] 00:13:28.527 }' 00:13:28.527 02:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.527 02:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.527 02:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.527 02:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.527 02:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:28.527 02:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.527 02:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.527 [2024-11-28 02:29:02.119106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.527 02:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.527 02:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:28.527 [2024-11-28 02:29:02.185846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:28.527 [2024-11-28 02:29:02.188705] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.786 [2024-11-28 02:29:02.328260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:29.045 [2024-11-28 
02:29:02.564671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:29.045 [2024-11-28 02:29:02.565632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:29.303 141.67 IOPS, 425.00 MiB/s [2024-11-28T02:29:02.982Z] [2024-11-28 02:29:02.912635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:29.303 [2024-11-28 02:29:02.913312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:29.561 [2024-11-28 02:29:03.051895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.561 "name": "raid_bdev1", 00:13:29.561 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:29.561 "strip_size_kb": 0, 00:13:29.561 "state": "online", 00:13:29.561 "raid_level": "raid1", 00:13:29.561 "superblock": false, 00:13:29.561 "num_base_bdevs": 4, 00:13:29.561 "num_base_bdevs_discovered": 4, 00:13:29.561 "num_base_bdevs_operational": 4, 00:13:29.561 "process": { 00:13:29.561 "type": "rebuild", 00:13:29.561 "target": "spare", 00:13:29.561 "progress": { 00:13:29.561 "blocks": 10240, 00:13:29.561 "percent": 15 00:13:29.561 } 00:13:29.561 }, 00:13:29.561 "base_bdevs_list": [ 00:13:29.561 { 00:13:29.561 "name": "spare", 00:13:29.561 "uuid": "16076076-3067-5273-b730-d4a7a07ebcc5", 00:13:29.561 "is_configured": true, 00:13:29.561 "data_offset": 0, 00:13:29.561 "data_size": 65536 00:13:29.561 }, 00:13:29.561 { 00:13:29.561 "name": "BaseBdev2", 00:13:29.561 "uuid": "2557c52f-8fd4-533e-b8e7-cc4f76f02c35", 00:13:29.561 "is_configured": true, 00:13:29.561 "data_offset": 0, 00:13:29.561 "data_size": 65536 00:13:29.561 }, 00:13:29.561 { 00:13:29.561 "name": "BaseBdev3", 00:13:29.561 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:29.561 "is_configured": true, 00:13:29.561 "data_offset": 0, 00:13:29.561 "data_size": 65536 00:13:29.561 }, 00:13:29.561 { 00:13:29.561 "name": "BaseBdev4", 00:13:29.561 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:29.561 "is_configured": true, 00:13:29.561 "data_offset": 0, 00:13:29.561 "data_size": 65536 00:13:29.561 } 00:13:29.561 ] 00:13:29.561 }' 00:13:29.561 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.820 [2024-11-28 02:29:03.325046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.820 [2024-11-28 02:29:03.387665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:29.820 [2024-11-28 02:29:03.422099] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:29.820 [2024-11-28 02:29:03.422172] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:29.820 [2024-11-28 02:29:03.431386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:29.820 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.821 "name": "raid_bdev1", 00:13:29.821 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:29.821 "strip_size_kb": 0, 00:13:29.821 "state": "online", 00:13:29.821 "raid_level": "raid1", 00:13:29.821 "superblock": false, 00:13:29.821 "num_base_bdevs": 4, 00:13:29.821 "num_base_bdevs_discovered": 3, 00:13:29.821 "num_base_bdevs_operational": 3, 00:13:29.821 "process": { 00:13:29.821 "type": "rebuild", 00:13:29.821 "target": "spare", 00:13:29.821 "progress": { 00:13:29.821 "blocks": 14336, 00:13:29.821 "percent": 21 00:13:29.821 } 00:13:29.821 }, 00:13:29.821 "base_bdevs_list": [ 00:13:29.821 { 00:13:29.821 "name": "spare", 00:13:29.821 "uuid": "16076076-3067-5273-b730-d4a7a07ebcc5", 00:13:29.821 "is_configured": true, 00:13:29.821 "data_offset": 0, 00:13:29.821 "data_size": 65536 00:13:29.821 }, 00:13:29.821 { 00:13:29.821 "name": null, 00:13:29.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.821 "is_configured": false, 00:13:29.821 
"data_offset": 0, 00:13:29.821 "data_size": 65536 00:13:29.821 }, 00:13:29.821 { 00:13:29.821 "name": "BaseBdev3", 00:13:29.821 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:29.821 "is_configured": true, 00:13:29.821 "data_offset": 0, 00:13:29.821 "data_size": 65536 00:13:29.821 }, 00:13:29.821 { 00:13:29.821 "name": "BaseBdev4", 00:13:29.821 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:29.821 "is_configured": true, 00:13:29.821 "data_offset": 0, 00:13:29.821 "data_size": 65536 00:13:29.821 } 00:13:29.821 ] 00:13:29.821 }' 00:13:29.821 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=473 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.080 "name": "raid_bdev1", 00:13:30.080 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:30.080 "strip_size_kb": 0, 00:13:30.080 "state": "online", 00:13:30.080 "raid_level": "raid1", 00:13:30.080 "superblock": false, 00:13:30.080 "num_base_bdevs": 4, 00:13:30.080 "num_base_bdevs_discovered": 3, 00:13:30.080 "num_base_bdevs_operational": 3, 00:13:30.080 "process": { 00:13:30.080 "type": "rebuild", 00:13:30.080 "target": "spare", 00:13:30.080 "progress": { 00:13:30.080 "blocks": 16384, 00:13:30.080 "percent": 25 00:13:30.080 } 00:13:30.080 }, 00:13:30.080 "base_bdevs_list": [ 00:13:30.080 { 00:13:30.080 "name": "spare", 00:13:30.080 "uuid": "16076076-3067-5273-b730-d4a7a07ebcc5", 00:13:30.080 "is_configured": true, 00:13:30.080 "data_offset": 0, 00:13:30.080 "data_size": 65536 00:13:30.080 }, 00:13:30.080 { 00:13:30.080 "name": null, 00:13:30.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.080 "is_configured": false, 00:13:30.080 "data_offset": 0, 00:13:30.080 "data_size": 65536 00:13:30.080 }, 00:13:30.080 { 00:13:30.080 "name": "BaseBdev3", 00:13:30.080 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:30.080 "is_configured": true, 00:13:30.080 "data_offset": 0, 00:13:30.080 "data_size": 65536 00:13:30.080 }, 00:13:30.080 { 00:13:30.080 "name": "BaseBdev4", 00:13:30.080 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:30.080 "is_configured": true, 00:13:30.080 "data_offset": 0, 00:13:30.080 "data_size": 65536 00:13:30.080 } 00:13:30.080 ] 00:13:30.080 }' 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.080 02:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:30.339 124.00 IOPS, 372.00 MiB/s [2024-11-28T02:29:04.018Z] [2024-11-28 02:29:03.849612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:30.598 [2024-11-28 02:29:04.059595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:30.598 [2024-11-28 02:29:04.261910] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:30.598 [2024-11-28 02:29:04.262265] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:31.166 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.167 108.40 IOPS, 325.20 MiB/s [2024-11-28T02:29:04.846Z] 02:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.167 "name": "raid_bdev1", 00:13:31.167 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:31.167 "strip_size_kb": 0, 00:13:31.167 "state": "online", 00:13:31.167 "raid_level": "raid1", 00:13:31.167 "superblock": false, 00:13:31.167 "num_base_bdevs": 4, 00:13:31.167 "num_base_bdevs_discovered": 3, 00:13:31.167 "num_base_bdevs_operational": 3, 00:13:31.167 "process": { 00:13:31.167 "type": "rebuild", 00:13:31.167 "target": "spare", 00:13:31.167 "progress": { 00:13:31.167 "blocks": 34816, 00:13:31.167 "percent": 53 00:13:31.167 } 00:13:31.167 }, 00:13:31.167 "base_bdevs_list": [ 00:13:31.167 { 00:13:31.167 "name": "spare", 00:13:31.167 "uuid": "16076076-3067-5273-b730-d4a7a07ebcc5", 00:13:31.167 "is_configured": true, 00:13:31.167 "data_offset": 0, 00:13:31.167 "data_size": 65536 00:13:31.167 }, 00:13:31.167 { 00:13:31.167 "name": null, 00:13:31.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.167 "is_configured": false, 00:13:31.167 "data_offset": 0, 00:13:31.167 "data_size": 65536 00:13:31.167 }, 00:13:31.167 { 00:13:31.167 "name": "BaseBdev3", 00:13:31.167 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:31.167 "is_configured": true, 00:13:31.167 "data_offset": 0, 00:13:31.167 "data_size": 65536 00:13:31.167 }, 00:13:31.167 { 00:13:31.167 "name": "BaseBdev4", 00:13:31.167 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:31.167 "is_configured": true, 
00:13:31.167 "data_offset": 0, 00:13:31.167 "data_size": 65536 00:13:31.167 } 00:13:31.167 ] 00:13:31.167 }' 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.167 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.426 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.426 02:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.684 [2024-11-28 02:29:05.155952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:31.942 [2024-11-28 02:29:05.373567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:32.199 96.83 IOPS, 290.50 MiB/s [2024-11-28T02:29:05.878Z] 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.199 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.199 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.199 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.199 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.199 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.199 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.199 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.199 02:29:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.199 02:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.458 02:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.458 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.458 "name": "raid_bdev1", 00:13:32.458 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:32.458 "strip_size_kb": 0, 00:13:32.458 "state": "online", 00:13:32.458 "raid_level": "raid1", 00:13:32.458 "superblock": false, 00:13:32.458 "num_base_bdevs": 4, 00:13:32.458 "num_base_bdevs_discovered": 3, 00:13:32.458 "num_base_bdevs_operational": 3, 00:13:32.458 "process": { 00:13:32.458 "type": "rebuild", 00:13:32.458 "target": "spare", 00:13:32.458 "progress": { 00:13:32.458 "blocks": 55296, 00:13:32.458 "percent": 84 00:13:32.458 } 00:13:32.458 }, 00:13:32.458 "base_bdevs_list": [ 00:13:32.458 { 00:13:32.458 "name": "spare", 00:13:32.458 "uuid": "16076076-3067-5273-b730-d4a7a07ebcc5", 00:13:32.458 "is_configured": true, 00:13:32.458 "data_offset": 0, 00:13:32.458 "data_size": 65536 00:13:32.458 }, 00:13:32.458 { 00:13:32.458 "name": null, 00:13:32.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.458 "is_configured": false, 00:13:32.458 "data_offset": 0, 00:13:32.458 "data_size": 65536 00:13:32.458 }, 00:13:32.458 { 00:13:32.458 "name": "BaseBdev3", 00:13:32.458 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:32.458 "is_configured": true, 00:13:32.458 "data_offset": 0, 00:13:32.458 "data_size": 65536 00:13:32.458 }, 00:13:32.458 { 00:13:32.458 "name": "BaseBdev4", 00:13:32.458 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:32.458 "is_configured": true, 00:13:32.458 "data_offset": 0, 00:13:32.458 "data_size": 65536 00:13:32.458 } 00:13:32.458 ] 00:13:32.458 }' 00:13:32.458 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:32.458 [2024-11-28 02:29:05.935696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:32.458 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.458 02:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.458 02:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.458 02:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:32.717 [2024-11-28 02:29:06.152893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:32.976 [2024-11-28 02:29:06.583973] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:33.235 [2024-11-28 02:29:06.689164] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:33.235 [2024-11-28 02:29:06.691957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.494 86.71 IOPS, 260.14 MiB/s [2024-11-28T02:29:07.174Z] 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.495 "name": "raid_bdev1", 00:13:33.495 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:33.495 "strip_size_kb": 0, 00:13:33.495 "state": "online", 00:13:33.495 "raid_level": "raid1", 00:13:33.495 "superblock": false, 00:13:33.495 "num_base_bdevs": 4, 00:13:33.495 "num_base_bdevs_discovered": 3, 00:13:33.495 "num_base_bdevs_operational": 3, 00:13:33.495 "base_bdevs_list": [ 00:13:33.495 { 00:13:33.495 "name": "spare", 00:13:33.495 "uuid": "16076076-3067-5273-b730-d4a7a07ebcc5", 00:13:33.495 "is_configured": true, 00:13:33.495 "data_offset": 0, 00:13:33.495 "data_size": 65536 00:13:33.495 }, 00:13:33.495 { 00:13:33.495 "name": null, 00:13:33.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.495 "is_configured": false, 00:13:33.495 "data_offset": 0, 00:13:33.495 "data_size": 65536 00:13:33.495 }, 00:13:33.495 { 00:13:33.495 "name": "BaseBdev3", 00:13:33.495 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:33.495 "is_configured": true, 00:13:33.495 "data_offset": 0, 00:13:33.495 "data_size": 65536 00:13:33.495 }, 00:13:33.495 { 00:13:33.495 "name": "BaseBdev4", 00:13:33.495 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:33.495 "is_configured": true, 00:13:33.495 "data_offset": 0, 00:13:33.495 "data_size": 65536 00:13:33.495 } 00:13:33.495 ] 00:13:33.495 }' 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.495 
02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.495 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.754 "name": "raid_bdev1", 00:13:33.754 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:33.754 "strip_size_kb": 0, 00:13:33.754 "state": "online", 00:13:33.754 "raid_level": "raid1", 00:13:33.754 "superblock": false, 00:13:33.754 "num_base_bdevs": 4, 00:13:33.754 "num_base_bdevs_discovered": 3, 00:13:33.754 "num_base_bdevs_operational": 3, 00:13:33.754 
"base_bdevs_list": [ 00:13:33.754 { 00:13:33.754 "name": "spare", 00:13:33.754 "uuid": "16076076-3067-5273-b730-d4a7a07ebcc5", 00:13:33.754 "is_configured": true, 00:13:33.754 "data_offset": 0, 00:13:33.754 "data_size": 65536 00:13:33.754 }, 00:13:33.754 { 00:13:33.754 "name": null, 00:13:33.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.754 "is_configured": false, 00:13:33.754 "data_offset": 0, 00:13:33.754 "data_size": 65536 00:13:33.754 }, 00:13:33.754 { 00:13:33.754 "name": "BaseBdev3", 00:13:33.754 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:33.754 "is_configured": true, 00:13:33.754 "data_offset": 0, 00:13:33.754 "data_size": 65536 00:13:33.754 }, 00:13:33.754 { 00:13:33.754 "name": "BaseBdev4", 00:13:33.754 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:33.754 "is_configured": true, 00:13:33.754 "data_offset": 0, 00:13:33.754 "data_size": 65536 00:13:33.754 } 00:13:33.754 ] 00:13:33.754 }' 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.754 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.754 02:29:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.755 "name": "raid_bdev1", 00:13:33.755 "uuid": "73293e2b-c79f-410c-8a36-8a7bcd56362c", 00:13:33.755 "strip_size_kb": 0, 00:13:33.755 "state": "online", 00:13:33.755 "raid_level": "raid1", 00:13:33.755 "superblock": false, 00:13:33.755 "num_base_bdevs": 4, 00:13:33.755 "num_base_bdevs_discovered": 3, 00:13:33.755 "num_base_bdevs_operational": 3, 00:13:33.755 "base_bdevs_list": [ 00:13:33.755 { 00:13:33.755 "name": "spare", 00:13:33.755 "uuid": "16076076-3067-5273-b730-d4a7a07ebcc5", 00:13:33.755 "is_configured": true, 00:13:33.755 "data_offset": 0, 00:13:33.755 "data_size": 65536 00:13:33.755 }, 00:13:33.755 { 00:13:33.755 "name": null, 00:13:33.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.755 "is_configured": false, 00:13:33.755 "data_offset": 0, 00:13:33.755 "data_size": 65536 00:13:33.755 }, 
00:13:33.755 { 00:13:33.755 "name": "BaseBdev3", 00:13:33.755 "uuid": "fa936cc5-1e21-552b-821a-7922751027d8", 00:13:33.755 "is_configured": true, 00:13:33.755 "data_offset": 0, 00:13:33.755 "data_size": 65536 00:13:33.755 }, 00:13:33.755 { 00:13:33.755 "name": "BaseBdev4", 00:13:33.755 "uuid": "ace7c50a-1399-5458-88a1-3b6544058367", 00:13:33.755 "is_configured": true, 00:13:33.755 "data_offset": 0, 00:13:33.755 "data_size": 65536 00:13:33.755 } 00:13:33.755 ] 00:13:33.755 }' 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.755 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.323 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:34.323 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.323 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.323 [2024-11-28 02:29:07.698170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.323 [2024-11-28 02:29:07.698210] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.323 80.25 IOPS, 240.75 MiB/s 00:13:34.323 Latency(us) 00:13:34.323 [2024-11-28T02:29:08.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.323 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:34.323 raid_bdev1 : 8.07 79.84 239.52 0.00 0.00 17628.89 314.80 127294.38 00:13:34.323 [2024-11-28T02:29:08.002Z] =================================================================================================================== 00:13:34.323 [2024-11-28T02:29:08.002Z] Total : 79.84 239.52 0.00 0.00 17628.89 314.80 127294.38 00:13:34.323 [2024-11-28 02:29:07.795080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.323 [2024-11-28 
02:29:07.795239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.323 [2024-11-28 02:29:07.795367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.323 [2024-11-28 02:29:07.795422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:34.323 { 00:13:34.323 "results": [ 00:13:34.323 { 00:13:34.323 "job": "raid_bdev1", 00:13:34.323 "core_mask": "0x1", 00:13:34.323 "workload": "randrw", 00:13:34.323 "percentage": 50, 00:13:34.323 "status": "finished", 00:13:34.323 "queue_depth": 2, 00:13:34.323 "io_size": 3145728, 00:13:34.323 "runtime": 8.066204, 00:13:34.324 "iops": 79.83928995596938, 00:13:34.324 "mibps": 239.51786986790813, 00:13:34.324 "io_failed": 0, 00:13:34.324 "io_timeout": 0, 00:13:34.324 "avg_latency_us": 17628.893650492282, 00:13:34.324 "min_latency_us": 314.80174672489085, 00:13:34.324 "max_latency_us": 127294.3790393013 00:13:34.324 } 00:13:34.324 ], 00:13:34.324 "core_count": 1 00:13:34.324 } 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = 
true ']' 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.324 02:29:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:34.583 /dev/nbd0 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 
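The trace above is the `waitfornbd` helper polling `/proc/partitions` until the freshly exported nbd device shows up. A standalone sketch of that polling pattern, under stated assumptions: the 20-attempt bound and the `grep -q -w` membership test are taken from the trace, but the sleep interval between attempts is an assumption (the trace does not show one).

```shell
#!/usr/bin/env bash
# Poll until a given nbd device name appears in /proc/partitions.
# Mirrors the waitfornbd loop visible in the trace above; the 0.1s
# pause between attempts is an assumption, not taken from the log.
waitfornbd() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the whole device name, so "nbd1" does not match "nbd10"
        if grep -q -w "$nbd_name" /proc/partitions; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $nbd_name" >&2
    return 1
}
```

Typical use is right after `nbd_start_disk`, e.g. `waitfornbd nbd0`, so that `dd`/`cmp` only run once the kernel has registered the device.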
00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.583 1+0 records in 00:13:34.583 1+0 records out 00:13:34.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269286 s, 15.2 MB/s 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:34.583 
02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.583 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:34.856 /dev/nbd1 00:13:34.856 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:34.856 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:34.856 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:34.856 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:34.856 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.856 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.856 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:34.856 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:34.856 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.857 1+0 records in 00:13:34.857 1+0 records out 00:13:34.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611283 s, 6.7 MB/s 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.857 02:29:08 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:35.117 
02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.117 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:35.376 /dev/nbd1 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.376 1+0 records in 00:13:35.376 1+0 records out 00:13:35.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557182 s, 7.4 MB/s 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.376 02:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:35.376 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:35.376 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.376 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:35.376 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:35.376 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:35.376 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.376 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.634 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@784 -- # killprocess 78504 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78504 ']' 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78504 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78504 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78504' 00:13:35.895 killing process with pid 78504 00:13:35.895 Received shutdown signal, test time was about 9.811023 seconds 00:13:35.895 00:13:35.895 Latency(us) 00:13:35.895 [2024-11-28T02:29:09.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.895 [2024-11-28T02:29:09.574Z] =================================================================================================================== 00:13:35.895 [2024-11-28T02:29:09.574Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78504 00:13:35.895 [2024-11-28 02:29:09.515440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:35.895 02:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78504 00:13:36.462 [2024-11-28 02:29:09.911286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.398 02:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:37.398 00:13:37.398 real 
0m13.291s 00:13:37.398 user 0m16.566s 00:13:37.398 sys 0m1.930s 00:13:37.398 02:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.398 02:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.398 ************************************ 00:13:37.398 END TEST raid_rebuild_test_io 00:13:37.398 ************************************ 00:13:37.656 02:29:11 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:37.656 02:29:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:37.656 02:29:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.656 02:29:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.656 ************************************ 00:13:37.656 START TEST raid_rebuild_test_sb_io 00:13:37.656 ************************************ 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:37.656 02:29:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78913 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78913 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78913 ']' 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.656 02:29:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.656 [2024-11-28 02:29:11.243327] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:37.656 [2024-11-28 02:29:11.243538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:37.656 Zero copy mechanism will not be used. 
00:13:37.656 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78913 ] 00:13:37.916 [2024-11-28 02:29:11.418316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.916 [2024-11-28 02:29:11.529401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.175 [2024-11-28 02:29:11.739649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.175 [2024-11-28 02:29:11.739793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.435 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.435 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:38.435 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.435 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:38.435 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.435 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.435 BaseBdev1_malloc 00:13:38.435 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.435 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:38.435 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.435 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.435 [2024-11-28 02:29:12.109538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:38.435 [2024-11-28 02:29:12.109658] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.435 [2024-11-28 02:29:12.109687] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:38.435 [2024-11-28 02:29:12.109701] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.435 [2024-11-28 02:29:12.111862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.435 [2024-11-28 02:29:12.111909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:38.695 BaseBdev1 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.695 BaseBdev2_malloc 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.695 [2024-11-28 02:29:12.165441] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:38.695 [2024-11-28 02:29:12.165521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.695 [2024-11-28 02:29:12.165547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:13:38.695 [2024-11-28 02:29:12.165560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.695 [2024-11-28 02:29:12.167698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.695 [2024-11-28 02:29:12.167747] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:38.695 BaseBdev2 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.695 BaseBdev3_malloc 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.695 [2024-11-28 02:29:12.238658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:38.695 [2024-11-28 02:29:12.238719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.695 [2024-11-28 02:29:12.238742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:38.695 [2024-11-28 02:29:12.238754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.695 [2024-11-28 
02:29:12.240907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.695 [2024-11-28 02:29:12.240962] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:38.695 BaseBdev3 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.695 BaseBdev4_malloc 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.695 [2024-11-28 02:29:12.293661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:38.695 [2024-11-28 02:29:12.293747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.695 [2024-11-28 02:29:12.293770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:38.695 [2024-11-28 02:29:12.293784] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.695 [2024-11-28 02:29:12.295903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.695 [2024-11-28 02:29:12.295956] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:38.695 BaseBdev4 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.695 spare_malloc 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.695 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.696 spare_delay 00:13:38.696 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.696 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:38.696 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.696 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.696 [2024-11-28 02:29:12.363050] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:38.696 [2024-11-28 02:29:12.363109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.696 [2024-11-28 02:29:12.363129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:38.696 [2024-11-28 02:29:12.363141] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.696 [2024-11-28 02:29:12.365237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.696 [2024-11-28 02:29:12.365283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:38.696 spare 00:13:38.696 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.696 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:38.696 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.696 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.956 [2024-11-28 02:29:12.375076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.956 [2024-11-28 02:29:12.376871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.956 [2024-11-28 02:29:12.376956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.956 [2024-11-28 02:29:12.377018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.956 [2024-11-28 02:29:12.377223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:38.956 [2024-11-28 02:29:12.377248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.956 [2024-11-28 02:29:12.377512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:38.956 [2024-11-28 02:29:12.377695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:38.956 [2024-11-28 02:29:12.377707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:38.956 
[2024-11-28 02:29:12.377872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:38.956 "name": "raid_bdev1", 00:13:38.956 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:38.956 "strip_size_kb": 0, 00:13:38.956 "state": "online", 00:13:38.956 "raid_level": "raid1", 00:13:38.956 "superblock": true, 00:13:38.956 "num_base_bdevs": 4, 00:13:38.956 "num_base_bdevs_discovered": 4, 00:13:38.956 "num_base_bdevs_operational": 4, 00:13:38.956 "base_bdevs_list": [ 00:13:38.956 { 00:13:38.956 "name": "BaseBdev1", 00:13:38.956 "uuid": "20567454-50e7-5805-aaa9-524c4fe78d34", 00:13:38.956 "is_configured": true, 00:13:38.956 "data_offset": 2048, 00:13:38.956 "data_size": 63488 00:13:38.956 }, 00:13:38.956 { 00:13:38.956 "name": "BaseBdev2", 00:13:38.956 "uuid": "7dbfa5b9-c874-55a4-9618-8bb863a718c3", 00:13:38.956 "is_configured": true, 00:13:38.956 "data_offset": 2048, 00:13:38.956 "data_size": 63488 00:13:38.956 }, 00:13:38.956 { 00:13:38.956 "name": "BaseBdev3", 00:13:38.956 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:38.956 "is_configured": true, 00:13:38.956 "data_offset": 2048, 00:13:38.956 "data_size": 63488 00:13:38.956 }, 00:13:38.956 { 00:13:38.956 "name": "BaseBdev4", 00:13:38.956 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:38.956 "is_configured": true, 00:13:38.956 "data_offset": 2048, 00:13:38.956 "data_size": 63488 00:13:38.956 } 00:13:38.956 ] 00:13:38.956 }' 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.956 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.216 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.216 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.216 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.216 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 
00:13:39.216 [2024-11-28 02:29:12.874586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.476 [2024-11-28 02:29:12.970043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.476 02:29:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.476 02:29:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.476 02:29:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.476 "name": "raid_bdev1", 00:13:39.476 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:39.476 "strip_size_kb": 0, 00:13:39.476 "state": "online", 00:13:39.476 "raid_level": "raid1", 00:13:39.476 "superblock": true, 00:13:39.476 "num_base_bdevs": 4, 00:13:39.476 "num_base_bdevs_discovered": 3, 00:13:39.476 "num_base_bdevs_operational": 3, 
00:13:39.476 "base_bdevs_list": [ 00:13:39.476 { 00:13:39.476 "name": null, 00:13:39.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.476 "is_configured": false, 00:13:39.476 "data_offset": 0, 00:13:39.476 "data_size": 63488 00:13:39.476 }, 00:13:39.476 { 00:13:39.476 "name": "BaseBdev2", 00:13:39.476 "uuid": "7dbfa5b9-c874-55a4-9618-8bb863a718c3", 00:13:39.476 "is_configured": true, 00:13:39.476 "data_offset": 2048, 00:13:39.476 "data_size": 63488 00:13:39.476 }, 00:13:39.476 { 00:13:39.476 "name": "BaseBdev3", 00:13:39.476 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:39.476 "is_configured": true, 00:13:39.476 "data_offset": 2048, 00:13:39.476 "data_size": 63488 00:13:39.476 }, 00:13:39.476 { 00:13:39.476 "name": "BaseBdev4", 00:13:39.476 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:39.476 "is_configured": true, 00:13:39.476 "data_offset": 2048, 00:13:39.476 "data_size": 63488 00:13:39.476 } 00:13:39.476 ] 00:13:39.476 }' 00:13:39.476 02:29:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.476 02:29:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.476 [2024-11-28 02:29:13.070058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:39.476 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:39.476 Zero copy mechanism will not be used. 00:13:39.476 Running I/O for 60 seconds... 
00:13:40.046 02:29:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.046 02:29:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.046 02:29:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.046 [2024-11-28 02:29:13.437268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.046 02:29:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.046 02:29:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:40.046 [2024-11-28 02:29:13.505614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:40.046 [2024-11-28 02:29:13.507758] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.046 [2024-11-28 02:29:13.616077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:40.046 [2024-11-28 02:29:13.616792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:40.305 [2024-11-28 02:29:13.725717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:40.305 [2024-11-28 02:29:13.726622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:40.565 [2024-11-28 02:29:14.054025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:40.565 [2024-11-28 02:29:14.055478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:40.825 136.00 IOPS, 408.00 MiB/s [2024-11-28T02:29:14.504Z] [2024-11-28 02:29:14.289572] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:40.825 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.825 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.825 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.825 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.825 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.825 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.825 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.825 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.825 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.085 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.085 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.085 "name": "raid_bdev1", 00:13:41.085 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:41.085 "strip_size_kb": 0, 00:13:41.085 "state": "online", 00:13:41.085 "raid_level": "raid1", 00:13:41.085 "superblock": true, 00:13:41.085 "num_base_bdevs": 4, 00:13:41.085 "num_base_bdevs_discovered": 4, 00:13:41.085 "num_base_bdevs_operational": 4, 00:13:41.085 "process": { 00:13:41.085 "type": "rebuild", 00:13:41.085 "target": "spare", 00:13:41.085 "progress": { 00:13:41.085 "blocks": 12288, 00:13:41.085 "percent": 19 00:13:41.085 } 00:13:41.085 }, 00:13:41.085 "base_bdevs_list": [ 00:13:41.085 { 00:13:41.085 "name": "spare", 
00:13:41.085 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:41.085 "is_configured": true, 00:13:41.085 "data_offset": 2048, 00:13:41.085 "data_size": 63488 00:13:41.085 }, 00:13:41.085 { 00:13:41.085 "name": "BaseBdev2", 00:13:41.085 "uuid": "7dbfa5b9-c874-55a4-9618-8bb863a718c3", 00:13:41.085 "is_configured": true, 00:13:41.085 "data_offset": 2048, 00:13:41.085 "data_size": 63488 00:13:41.085 }, 00:13:41.085 { 00:13:41.085 "name": "BaseBdev3", 00:13:41.085 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:41.085 "is_configured": true, 00:13:41.085 "data_offset": 2048, 00:13:41.085 "data_size": 63488 00:13:41.085 }, 00:13:41.085 { 00:13:41.085 "name": "BaseBdev4", 00:13:41.085 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:41.085 "is_configured": true, 00:13:41.085 "data_offset": 2048, 00:13:41.085 "data_size": 63488 00:13:41.085 } 00:13:41.085 ] 00:13:41.085 }' 00:13:41.085 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.085 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.085 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.085 [2024-11-28 02:29:14.614974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:41.085 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.085 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:41.085 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.085 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.085 [2024-11-28 02:29:14.651828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.345 [2024-11-28 
02:29:14.809777] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:41.345 [2024-11-28 02:29:14.820062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.345 [2024-11-28 02:29:14.820112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.345 [2024-11-28 02:29:14.820130] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:41.345 [2024-11-28 02:29:14.850984] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.345 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.346 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.346 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.346 "name": "raid_bdev1", 00:13:41.346 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:41.346 "strip_size_kb": 0, 00:13:41.346 "state": "online", 00:13:41.346 "raid_level": "raid1", 00:13:41.346 "superblock": true, 00:13:41.346 "num_base_bdevs": 4, 00:13:41.346 "num_base_bdevs_discovered": 3, 00:13:41.346 "num_base_bdevs_operational": 3, 00:13:41.346 "base_bdevs_list": [ 00:13:41.346 { 00:13:41.346 "name": null, 00:13:41.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.346 "is_configured": false, 00:13:41.346 "data_offset": 0, 00:13:41.346 "data_size": 63488 00:13:41.346 }, 00:13:41.346 { 00:13:41.346 "name": "BaseBdev2", 00:13:41.346 "uuid": "7dbfa5b9-c874-55a4-9618-8bb863a718c3", 00:13:41.346 "is_configured": true, 00:13:41.346 "data_offset": 2048, 00:13:41.346 "data_size": 63488 00:13:41.346 }, 00:13:41.346 { 00:13:41.346 "name": "BaseBdev3", 00:13:41.346 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:41.346 "is_configured": true, 00:13:41.346 "data_offset": 2048, 00:13:41.346 "data_size": 63488 00:13:41.346 }, 00:13:41.346 { 00:13:41.346 "name": "BaseBdev4", 00:13:41.346 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:41.346 "is_configured": true, 00:13:41.346 "data_offset": 2048, 00:13:41.346 "data_size": 63488 00:13:41.346 } 00:13:41.346 ] 00:13:41.346 }' 00:13:41.346 02:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.346 02:29:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.865 127.00 IOPS, 381.00 MiB/s [2024-11-28T02:29:15.544Z] 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.865 "name": "raid_bdev1", 00:13:41.865 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:41.865 "strip_size_kb": 0, 00:13:41.865 "state": "online", 00:13:41.865 "raid_level": "raid1", 00:13:41.865 "superblock": true, 00:13:41.865 "num_base_bdevs": 4, 00:13:41.865 "num_base_bdevs_discovered": 3, 00:13:41.865 "num_base_bdevs_operational": 3, 00:13:41.865 "base_bdevs_list": [ 00:13:41.865 { 00:13:41.865 "name": null, 00:13:41.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.865 "is_configured": false, 00:13:41.865 "data_offset": 0, 00:13:41.865 "data_size": 63488 00:13:41.865 }, 00:13:41.865 { 
00:13:41.865 "name": "BaseBdev2", 00:13:41.865 "uuid": "7dbfa5b9-c874-55a4-9618-8bb863a718c3", 00:13:41.865 "is_configured": true, 00:13:41.865 "data_offset": 2048, 00:13:41.865 "data_size": 63488 00:13:41.865 }, 00:13:41.865 { 00:13:41.865 "name": "BaseBdev3", 00:13:41.865 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:41.865 "is_configured": true, 00:13:41.865 "data_offset": 2048, 00:13:41.865 "data_size": 63488 00:13:41.865 }, 00:13:41.865 { 00:13:41.865 "name": "BaseBdev4", 00:13:41.865 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:41.865 "is_configured": true, 00:13:41.865 "data_offset": 2048, 00:13:41.865 "data_size": 63488 00:13:41.865 } 00:13:41.865 ] 00:13:41.865 }' 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.865 [2024-11-28 02:29:15.468113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.865 02:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:41.865 [2024-11-28 02:29:15.533211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:41.865 [2024-11-28 02:29:15.535205] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.124 [2024-11-28 02:29:15.644480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.124 [2024-11-28 02:29:15.645826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.384 [2024-11-28 02:29:15.877433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.384 [2024-11-28 02:29:15.878255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.643 132.00 IOPS, 396.00 MiB/s [2024-11-28T02:29:16.322Z] [2024-11-28 02:29:16.235167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:42.643 [2024-11-28 02:29:16.236653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:42.903 [2024-11-28 02:29:16.456026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:42.903 [2024-11-28 02:29:16.456811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.903 "name": "raid_bdev1", 00:13:42.903 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:42.903 "strip_size_kb": 0, 00:13:42.903 "state": "online", 00:13:42.903 "raid_level": "raid1", 00:13:42.903 "superblock": true, 00:13:42.903 "num_base_bdevs": 4, 00:13:42.903 "num_base_bdevs_discovered": 4, 00:13:42.903 "num_base_bdevs_operational": 4, 00:13:42.903 "process": { 00:13:42.903 "type": "rebuild", 00:13:42.903 "target": "spare", 00:13:42.903 "progress": { 00:13:42.903 "blocks": 10240, 00:13:42.903 "percent": 16 00:13:42.903 } 00:13:42.903 }, 00:13:42.903 "base_bdevs_list": [ 00:13:42.903 { 00:13:42.903 "name": "spare", 00:13:42.903 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:42.903 "is_configured": true, 00:13:42.903 "data_offset": 2048, 00:13:42.903 "data_size": 63488 00:13:42.903 }, 00:13:42.903 { 00:13:42.903 "name": "BaseBdev2", 00:13:42.903 "uuid": "7dbfa5b9-c874-55a4-9618-8bb863a718c3", 00:13:42.903 "is_configured": true, 00:13:42.903 "data_offset": 2048, 00:13:42.903 "data_size": 63488 00:13:42.903 }, 00:13:42.903 { 00:13:42.903 "name": "BaseBdev3", 00:13:42.903 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:42.903 "is_configured": true, 00:13:42.903 "data_offset": 2048, 00:13:42.903 "data_size": 63488 00:13:42.903 }, 00:13:42.903 { 00:13:42.903 "name": 
"BaseBdev4", 00:13:42.903 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:42.903 "is_configured": true, 00:13:42.903 "data_offset": 2048, 00:13:42.903 "data_size": 63488 00:13:42.903 } 00:13:42.903 ] 00:13:42.903 }' 00:13:42.903 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:43.163 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.163 [2024-11-28 02:29:16.646436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:43.163 [2024-11-28 02:29:16.795185] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:43.163 [2024-11-28 02:29:16.795263] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.163 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.424 "name": "raid_bdev1", 00:13:43.424 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:43.424 "strip_size_kb": 0, 00:13:43.424 "state": "online", 00:13:43.424 "raid_level": "raid1", 00:13:43.424 "superblock": true, 00:13:43.424 "num_base_bdevs": 4, 00:13:43.424 "num_base_bdevs_discovered": 3, 00:13:43.424 
"num_base_bdevs_operational": 3, 00:13:43.424 "process": { 00:13:43.424 "type": "rebuild", 00:13:43.424 "target": "spare", 00:13:43.424 "progress": { 00:13:43.424 "blocks": 12288, 00:13:43.424 "percent": 19 00:13:43.424 } 00:13:43.424 }, 00:13:43.424 "base_bdevs_list": [ 00:13:43.424 { 00:13:43.424 "name": "spare", 00:13:43.424 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:43.424 "is_configured": true, 00:13:43.424 "data_offset": 2048, 00:13:43.424 "data_size": 63488 00:13:43.424 }, 00:13:43.424 { 00:13:43.424 "name": null, 00:13:43.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.424 "is_configured": false, 00:13:43.424 "data_offset": 0, 00:13:43.424 "data_size": 63488 00:13:43.424 }, 00:13:43.424 { 00:13:43.424 "name": "BaseBdev3", 00:13:43.424 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:43.424 "is_configured": true, 00:13:43.424 "data_offset": 2048, 00:13:43.424 "data_size": 63488 00:13:43.424 }, 00:13:43.424 { 00:13:43.424 "name": "BaseBdev4", 00:13:43.424 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:43.424 "is_configured": true, 00:13:43.424 "data_offset": 2048, 00:13:43.424 "data_size": 63488 00:13:43.424 } 00:13:43.424 ] 00:13:43.424 }' 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.424 [2024-11-28 02:29:16.920066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.424 "name": "raid_bdev1", 00:13:43.424 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:43.424 "strip_size_kb": 0, 00:13:43.424 "state": "online", 00:13:43.424 "raid_level": "raid1", 00:13:43.424 "superblock": true, 00:13:43.424 "num_base_bdevs": 4, 00:13:43.424 "num_base_bdevs_discovered": 3, 00:13:43.424 "num_base_bdevs_operational": 3, 00:13:43.424 "process": { 00:13:43.424 "type": "rebuild", 00:13:43.424 "target": "spare", 00:13:43.424 "progress": { 00:13:43.424 "blocks": 14336, 00:13:43.424 "percent": 22 00:13:43.424 } 00:13:43.424 }, 00:13:43.424 "base_bdevs_list": [ 00:13:43.424 { 00:13:43.424 "name": "spare", 00:13:43.424 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 
00:13:43.424 "is_configured": true, 00:13:43.424 "data_offset": 2048, 00:13:43.424 "data_size": 63488 00:13:43.424 }, 00:13:43.424 { 00:13:43.424 "name": null, 00:13:43.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.424 "is_configured": false, 00:13:43.424 "data_offset": 0, 00:13:43.424 "data_size": 63488 00:13:43.424 }, 00:13:43.424 { 00:13:43.424 "name": "BaseBdev3", 00:13:43.424 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:43.424 "is_configured": true, 00:13:43.424 "data_offset": 2048, 00:13:43.424 "data_size": 63488 00:13:43.424 }, 00:13:43.424 { 00:13:43.424 "name": "BaseBdev4", 00:13:43.424 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:43.424 "is_configured": true, 00:13:43.424 "data_offset": 2048, 00:13:43.424 "data_size": 63488 00:13:43.424 } 00:13:43.424 ] 00:13:43.424 }' 00:13:43.424 02:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.424 02:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.424 02:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.424 [2024-11-28 02:29:17.022033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:43.424 [2024-11-28 02:29:17.022418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:43.424 02:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.424 02:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.995 126.25 IOPS, 378.75 MiB/s [2024-11-28T02:29:17.674Z] [2024-11-28 02:29:17.387244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:43.995 [2024-11-28 02:29:17.387461] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:43.995 [2024-11-28 02:29:17.656315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.573 115.40 IOPS, 346.20 MiB/s [2024-11-28T02:29:18.252Z] 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.573 [2024-11-28 02:29:18.089025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:44.573 [2024-11-28 02:29:18.089974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:44.573 "name": "raid_bdev1", 00:13:44.573 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:44.573 "strip_size_kb": 0, 00:13:44.573 "state": "online", 00:13:44.573 "raid_level": "raid1", 00:13:44.573 "superblock": true, 00:13:44.573 "num_base_bdevs": 4, 00:13:44.573 "num_base_bdevs_discovered": 3, 00:13:44.573 "num_base_bdevs_operational": 3, 00:13:44.573 "process": { 00:13:44.573 "type": "rebuild", 00:13:44.573 "target": "spare", 00:13:44.573 "progress": { 00:13:44.573 "blocks": 30720, 00:13:44.573 "percent": 48 00:13:44.573 } 00:13:44.573 }, 00:13:44.573 "base_bdevs_list": [ 00:13:44.573 { 00:13:44.573 "name": "spare", 00:13:44.573 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:44.573 "is_configured": true, 00:13:44.573 "data_offset": 2048, 00:13:44.573 "data_size": 63488 00:13:44.573 }, 00:13:44.573 { 00:13:44.573 "name": null, 00:13:44.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.573 "is_configured": false, 00:13:44.573 "data_offset": 0, 00:13:44.573 "data_size": 63488 00:13:44.573 }, 00:13:44.573 { 00:13:44.573 "name": "BaseBdev3", 00:13:44.573 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:44.573 "is_configured": true, 00:13:44.573 "data_offset": 2048, 00:13:44.573 "data_size": 63488 00:13:44.573 }, 00:13:44.573 { 00:13:44.573 "name": "BaseBdev4", 00:13:44.573 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:44.573 "is_configured": true, 00:13:44.573 "data_offset": 2048, 00:13:44.573 "data_size": 63488 00:13:44.573 } 00:13:44.573 ] 00:13:44.573 }' 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:13:44.573 02:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:44.848 [2024-11-28 02:29:18.306309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:45.107 [2024-11-28 02:29:18.662352] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:45.677 103.67 IOPS, 311.00 MiB/s [2024-11-28T02:29:19.356Z] 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.677 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.677 "name": "raid_bdev1", 00:13:45.677 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:45.677 "strip_size_kb": 0, 00:13:45.677 "state": 
"online", 00:13:45.677 "raid_level": "raid1", 00:13:45.677 "superblock": true, 00:13:45.677 "num_base_bdevs": 4, 00:13:45.677 "num_base_bdevs_discovered": 3, 00:13:45.677 "num_base_bdevs_operational": 3, 00:13:45.677 "process": { 00:13:45.677 "type": "rebuild", 00:13:45.677 "target": "spare", 00:13:45.677 "progress": { 00:13:45.677 "blocks": 45056, 00:13:45.677 "percent": 70 00:13:45.677 } 00:13:45.677 }, 00:13:45.677 "base_bdevs_list": [ 00:13:45.677 { 00:13:45.677 "name": "spare", 00:13:45.677 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:45.677 "is_configured": true, 00:13:45.677 "data_offset": 2048, 00:13:45.678 "data_size": 63488 00:13:45.678 }, 00:13:45.678 { 00:13:45.678 "name": null, 00:13:45.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.678 "is_configured": false, 00:13:45.678 "data_offset": 0, 00:13:45.678 "data_size": 63488 00:13:45.678 }, 00:13:45.678 { 00:13:45.678 "name": "BaseBdev3", 00:13:45.678 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:45.678 "is_configured": true, 00:13:45.678 "data_offset": 2048, 00:13:45.678 "data_size": 63488 00:13:45.678 }, 00:13:45.678 { 00:13:45.678 "name": "BaseBdev4", 00:13:45.678 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:45.678 "is_configured": true, 00:13:45.678 "data_offset": 2048, 00:13:45.678 "data_size": 63488 00:13:45.678 } 00:13:45.678 ] 00:13:45.678 }' 00:13:45.678 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.678 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.678 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.678 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.678 02:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.248 [2024-11-28 02:29:19.755398] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:46.508 [2024-11-28 02:29:19.965824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:46.768 96.00 IOPS, 288.00 MiB/s [2024-11-28T02:29:20.447Z] [2024-11-28 02:29:20.286159] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.768 [2024-11-28 02:29:20.385933] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:46.768 [2024-11-28 02:29:20.394542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.768 02:29:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.768 "name": "raid_bdev1", 00:13:46.768 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:46.768 "strip_size_kb": 0, 00:13:46.768 "state": "online", 00:13:46.768 "raid_level": "raid1", 00:13:46.768 "superblock": true, 00:13:46.768 "num_base_bdevs": 4, 00:13:46.768 "num_base_bdevs_discovered": 3, 00:13:46.768 "num_base_bdevs_operational": 3, 00:13:46.768 "process": { 00:13:46.768 "type": "rebuild", 00:13:46.768 "target": "spare", 00:13:46.768 "progress": { 00:13:46.768 "blocks": 63488, 00:13:46.768 "percent": 100 00:13:46.768 } 00:13:46.768 }, 00:13:46.768 "base_bdevs_list": [ 00:13:46.768 { 00:13:46.768 "name": "spare", 00:13:46.768 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:46.768 "is_configured": true, 00:13:46.768 "data_offset": 2048, 00:13:46.768 "data_size": 63488 00:13:46.768 }, 00:13:46.768 { 00:13:46.768 "name": null, 00:13:46.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.768 "is_configured": false, 00:13:46.768 "data_offset": 0, 00:13:46.768 "data_size": 63488 00:13:46.768 }, 00:13:46.768 { 00:13:46.768 "name": "BaseBdev3", 00:13:46.768 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:46.768 "is_configured": true, 00:13:46.768 "data_offset": 2048, 00:13:46.768 "data_size": 63488 00:13:46.768 }, 00:13:46.768 { 00:13:46.768 "name": "BaseBdev4", 00:13:46.768 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:46.768 "is_configured": true, 00:13:46.768 "data_offset": 2048, 00:13:46.768 "data_size": 63488 00:13:46.768 } 00:13:46.768 ] 00:13:46.768 }' 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.768 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.028 02:29:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.028 02:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.858 88.75 IOPS, 266.25 MiB/s [2024-11-28T02:29:21.537Z] 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.858 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.858 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.858 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.858 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.858 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.858 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.858 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.858 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.858 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.858 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.119 "name": "raid_bdev1", 00:13:48.119 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:48.119 "strip_size_kb": 0, 00:13:48.119 "state": "online", 00:13:48.119 "raid_level": "raid1", 00:13:48.119 "superblock": true, 00:13:48.119 "num_base_bdevs": 4, 00:13:48.119 "num_base_bdevs_discovered": 3, 00:13:48.119 "num_base_bdevs_operational": 3, 00:13:48.119 "base_bdevs_list": [ 00:13:48.119 { 
00:13:48.119 "name": "spare", 00:13:48.119 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:48.119 "is_configured": true, 00:13:48.119 "data_offset": 2048, 00:13:48.119 "data_size": 63488 00:13:48.119 }, 00:13:48.119 { 00:13:48.119 "name": null, 00:13:48.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.119 "is_configured": false, 00:13:48.119 "data_offset": 0, 00:13:48.119 "data_size": 63488 00:13:48.119 }, 00:13:48.119 { 00:13:48.119 "name": "BaseBdev3", 00:13:48.119 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:48.119 "is_configured": true, 00:13:48.119 "data_offset": 2048, 00:13:48.119 "data_size": 63488 00:13:48.119 }, 00:13:48.119 { 00:13:48.119 "name": "BaseBdev4", 00:13:48.119 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:48.119 "is_configured": true, 00:13:48.119 "data_offset": 2048, 00:13:48.119 "data_size": 63488 00:13:48.119 } 00:13:48.119 ] 00:13:48.119 }' 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.119 "name": "raid_bdev1", 00:13:48.119 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:48.119 "strip_size_kb": 0, 00:13:48.119 "state": "online", 00:13:48.119 "raid_level": "raid1", 00:13:48.119 "superblock": true, 00:13:48.119 "num_base_bdevs": 4, 00:13:48.119 "num_base_bdevs_discovered": 3, 00:13:48.119 "num_base_bdevs_operational": 3, 00:13:48.119 "base_bdevs_list": [ 00:13:48.119 { 00:13:48.119 "name": "spare", 00:13:48.119 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:48.119 "is_configured": true, 00:13:48.119 "data_offset": 2048, 00:13:48.119 "data_size": 63488 00:13:48.119 }, 00:13:48.119 { 00:13:48.119 "name": null, 00:13:48.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.119 "is_configured": false, 00:13:48.119 "data_offset": 0, 00:13:48.119 "data_size": 63488 00:13:48.119 }, 00:13:48.119 { 00:13:48.119 "name": "BaseBdev3", 00:13:48.119 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:48.119 "is_configured": true, 00:13:48.119 "data_offset": 2048, 00:13:48.119 "data_size": 63488 00:13:48.119 }, 00:13:48.119 { 00:13:48.119 "name": "BaseBdev4", 00:13:48.119 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:48.119 "is_configured": true, 00:13:48.119 "data_offset": 2048, 00:13:48.119 "data_size": 63488 
00:13:48.119 } 00:13:48.119 ] 00:13:48.119 }' 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.119 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.120 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.380 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.380 "name": "raid_bdev1", 00:13:48.380 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:48.380 "strip_size_kb": 0, 00:13:48.380 "state": "online", 00:13:48.380 "raid_level": "raid1", 00:13:48.380 "superblock": true, 00:13:48.380 "num_base_bdevs": 4, 00:13:48.380 "num_base_bdevs_discovered": 3, 00:13:48.380 "num_base_bdevs_operational": 3, 00:13:48.380 "base_bdevs_list": [ 00:13:48.380 { 00:13:48.380 "name": "spare", 00:13:48.380 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:48.380 "is_configured": true, 00:13:48.380 "data_offset": 2048, 00:13:48.380 "data_size": 63488 00:13:48.380 }, 00:13:48.380 { 00:13:48.380 "name": null, 00:13:48.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.380 "is_configured": false, 00:13:48.380 "data_offset": 0, 00:13:48.380 "data_size": 63488 00:13:48.380 }, 00:13:48.380 { 00:13:48.380 "name": "BaseBdev3", 00:13:48.380 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:48.380 "is_configured": true, 00:13:48.380 "data_offset": 2048, 00:13:48.380 "data_size": 63488 00:13:48.380 }, 00:13:48.380 { 00:13:48.380 "name": "BaseBdev4", 00:13:48.380 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:48.380 "is_configured": true, 00:13:48.380 "data_offset": 2048, 00:13:48.380 "data_size": 63488 00:13:48.380 } 00:13:48.380 ] 00:13:48.380 }' 00:13:48.380 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.380 02:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.641 81.56 IOPS, 244.67 MiB/s [2024-11-28T02:29:22.320Z] 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.641 [2024-11-28 02:29:22.203437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:48.641 [2024-11-28 02:29:22.203535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.641 00:13:48.641 Latency(us) 00:13:48.641 [2024-11-28T02:29:22.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.641 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:48.641 raid_bdev1 : 9.18 80.42 241.26 0.00 0.00 18030.80 350.57 110810.21 00:13:48.641 [2024-11-28T02:29:22.320Z] =================================================================================================================== 00:13:48.641 [2024-11-28T02:29:22.320Z] Total : 80.42 241.26 0.00 0.00 18030.80 350.57 110810.21 00:13:48.641 [2024-11-28 02:29:22.251750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.641 [2024-11-28 02:29:22.251870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.641 [2024-11-28 02:29:22.252020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.641 [2024-11-28 02:29:22.252082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:48.641 { 00:13:48.641 "results": [ 00:13:48.641 { 00:13:48.641 "job": "raid_bdev1", 00:13:48.641 "core_mask": "0x1", 00:13:48.641 "workload": "randrw", 00:13:48.641 "percentage": 50, 00:13:48.641 "status": "finished", 00:13:48.641 "queue_depth": 2, 00:13:48.641 "io_size": 3145728, 00:13:48.641 "runtime": 9.176664, 00:13:48.641 "iops": 80.42138188779714, 00:13:48.641 "mibps": 241.26414566339142, 
00:13:48.641 "io_failed": 0, 00:13:48.641 "io_timeout": 0, 00:13:48.641 "avg_latency_us": 18030.803971550635, 00:13:48.641 "min_latency_us": 350.57467248908296, 00:13:48.641 "max_latency_us": 110810.21484716157 00:13:48.641 } 00:13:48.641 ], 00:13:48.641 "core_count": 1 00:13:48.641 } 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.641 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # 
local i 00:13:48.642 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.642 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.642 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:48.902 /dev/nbd0 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.902 1+0 records in 00:13:48.902 1+0 records out 00:13:48.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210472 s, 19.5 MB/s 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.902 02:29:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:48.902 02:29:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.902 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:49.162 /dev/nbd1 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.162 1+0 records in 00:13:49.162 1+0 records out 00:13:49.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436343 s, 9.4 MB/s 00:13:49.162 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.422 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # size=4096 00:13:49.422 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.422 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:49.422 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:49.422 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.422 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.422 02:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:49.422 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:49.422 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.422 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:49.422 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:49.422 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:49.422 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.422 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.682 
02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.682 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:49.943 /dev/nbd1 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.943 1+0 records in 00:13:49.943 1+0 records out 00:13:49.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277227 s, 14.8 MB/s 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.943 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:50.203 02:29:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.203 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.463 02:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.463 02:29:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.463 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:50.463 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.464 [2024-11-28 02:29:24.016996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:50.464 [2024-11-28 02:29:24.017059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.464 [2024-11-28 02:29:24.017080] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:50.464 [2024-11-28 02:29:24.017092] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.464 [2024-11-28 02:29:24.019304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.464 [2024-11-28 02:29:24.019408] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:50.464 [2024-11-28 02:29:24.019513] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:50.464 [2024-11-28 02:29:24.019575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:50.464 [2024-11-28 02:29:24.019736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.464 [2024-11-28 02:29:24.019851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:50.464 spare 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.464 [2024-11-28 02:29:24.119763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:50.464 [2024-11-28 02:29:24.119799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.464 [2024-11-28 02:29:24.120148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:13:50.464 [2024-11-28 02:29:24.120349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:50.464 [2024-11-28 02:29:24.120367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:50.464 [2024-11-28 02:29:24.120564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.464 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.725 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.725 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.725 "name": "raid_bdev1", 00:13:50.725 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:50.725 "strip_size_kb": 0, 00:13:50.725 "state": "online", 00:13:50.725 "raid_level": "raid1", 00:13:50.725 "superblock": true, 00:13:50.725 "num_base_bdevs": 4, 00:13:50.725 "num_base_bdevs_discovered": 3, 00:13:50.725 "num_base_bdevs_operational": 3, 00:13:50.725 "base_bdevs_list": [ 00:13:50.725 { 00:13:50.725 "name": "spare", 00:13:50.725 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:50.725 "is_configured": true, 00:13:50.725 "data_offset": 2048, 00:13:50.725 "data_size": 63488 00:13:50.725 }, 00:13:50.725 { 00:13:50.725 "name": null, 00:13:50.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.725 "is_configured": false, 00:13:50.725 "data_offset": 2048, 00:13:50.725 "data_size": 63488 00:13:50.725 }, 00:13:50.725 { 00:13:50.725 "name": "BaseBdev3", 00:13:50.725 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:50.725 "is_configured": true, 00:13:50.725 "data_offset": 2048, 00:13:50.725 "data_size": 63488 00:13:50.725 }, 00:13:50.725 { 00:13:50.725 "name": "BaseBdev4", 00:13:50.725 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:50.725 
"is_configured": true, 00:13:50.725 "data_offset": 2048, 00:13:50.725 "data_size": 63488 00:13:50.725 } 00:13:50.725 ] 00:13:50.725 }' 00:13:50.725 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.725 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.985 "name": "raid_bdev1", 00:13:50.985 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:50.985 "strip_size_kb": 0, 00:13:50.985 "state": "online", 00:13:50.985 "raid_level": "raid1", 00:13:50.985 "superblock": true, 00:13:50.985 "num_base_bdevs": 4, 00:13:50.985 "num_base_bdevs_discovered": 3, 00:13:50.985 "num_base_bdevs_operational": 3, 00:13:50.985 "base_bdevs_list": [ 00:13:50.985 { 00:13:50.985 "name": 
"spare", 00:13:50.985 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:50.985 "is_configured": true, 00:13:50.985 "data_offset": 2048, 00:13:50.985 "data_size": 63488 00:13:50.985 }, 00:13:50.985 { 00:13:50.985 "name": null, 00:13:50.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.985 "is_configured": false, 00:13:50.985 "data_offset": 2048, 00:13:50.985 "data_size": 63488 00:13:50.985 }, 00:13:50.985 { 00:13:50.985 "name": "BaseBdev3", 00:13:50.985 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:50.985 "is_configured": true, 00:13:50.985 "data_offset": 2048, 00:13:50.985 "data_size": 63488 00:13:50.985 }, 00:13:50.985 { 00:13:50.985 "name": "BaseBdev4", 00:13:50.985 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:50.985 "is_configured": true, 00:13:50.985 "data_offset": 2048, 00:13:50.985 "data_size": 63488 00:13:50.985 } 00:13:50.985 ] 00:13:50.985 }' 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.985 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ 
spare == \s\p\a\r\e ]] 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.245 [2024-11-28 02:29:24.751971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.245 "name": "raid_bdev1", 00:13:51.245 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:51.245 "strip_size_kb": 0, 00:13:51.245 "state": "online", 00:13:51.245 "raid_level": "raid1", 00:13:51.245 "superblock": true, 00:13:51.245 "num_base_bdevs": 4, 00:13:51.245 "num_base_bdevs_discovered": 2, 00:13:51.245 "num_base_bdevs_operational": 2, 00:13:51.245 "base_bdevs_list": [ 00:13:51.245 { 00:13:51.245 "name": null, 00:13:51.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.245 "is_configured": false, 00:13:51.245 "data_offset": 0, 00:13:51.245 "data_size": 63488 00:13:51.245 }, 00:13:51.245 { 00:13:51.245 "name": null, 00:13:51.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.245 "is_configured": false, 00:13:51.245 "data_offset": 2048, 00:13:51.245 "data_size": 63488 00:13:51.245 }, 00:13:51.245 { 00:13:51.245 "name": "BaseBdev3", 00:13:51.245 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:51.245 "is_configured": true, 00:13:51.245 "data_offset": 2048, 00:13:51.245 "data_size": 63488 00:13:51.245 }, 00:13:51.245 { 00:13:51.245 "name": "BaseBdev4", 00:13:51.245 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:51.245 "is_configured": true, 00:13:51.245 "data_offset": 2048, 00:13:51.245 "data_size": 63488 00:13:51.245 } 00:13:51.245 ] 00:13:51.245 }' 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.245 02:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.813 02:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.813 02:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.813 02:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.813 [2024-11-28 02:29:25.195474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.813 [2024-11-28 02:29:25.195766] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:51.813 [2024-11-28 02:29:25.195857] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:51.813 [2024-11-28 02:29:25.195942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.813 [2024-11-28 02:29:25.210524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:13:51.813 02:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.813 02:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:51.813 [2024-11-28 02:29:25.212366] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.752 02:29:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.752 "name": "raid_bdev1", 00:13:52.752 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:52.752 "strip_size_kb": 0, 00:13:52.752 "state": "online", 00:13:52.752 "raid_level": "raid1", 00:13:52.752 "superblock": true, 00:13:52.752 "num_base_bdevs": 4, 00:13:52.752 "num_base_bdevs_discovered": 3, 00:13:52.752 "num_base_bdevs_operational": 3, 00:13:52.752 "process": { 00:13:52.752 "type": "rebuild", 00:13:52.752 "target": "spare", 00:13:52.752 "progress": { 00:13:52.752 "blocks": 20480, 00:13:52.752 "percent": 32 00:13:52.752 } 00:13:52.752 }, 00:13:52.752 "base_bdevs_list": [ 00:13:52.752 { 00:13:52.752 "name": "spare", 00:13:52.752 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:52.752 "is_configured": true, 00:13:52.752 "data_offset": 2048, 00:13:52.752 "data_size": 63488 00:13:52.752 }, 00:13:52.752 { 00:13:52.752 "name": null, 00:13:52.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.752 "is_configured": false, 00:13:52.752 "data_offset": 2048, 00:13:52.752 "data_size": 63488 00:13:52.752 }, 00:13:52.752 { 00:13:52.752 "name": "BaseBdev3", 00:13:52.752 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:52.752 "is_configured": true, 00:13:52.752 "data_offset": 2048, 00:13:52.752 "data_size": 63488 00:13:52.752 }, 00:13:52.752 { 00:13:52.752 "name": "BaseBdev4", 00:13:52.752 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:52.752 "is_configured": true, 00:13:52.752 "data_offset": 2048, 00:13:52.752 
"data_size": 63488 00:13:52.752 } 00:13:52.752 ] 00:13:52.752 }' 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.752 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.752 [2024-11-28 02:29:26.356055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.752 [2024-11-28 02:29:26.417277] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:52.752 [2024-11-28 02:29:26.417410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.752 [2024-11-28 02:29:26.417429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.752 [2024-11-28 02:29:26.417440] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.013 02:29:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.013 "name": "raid_bdev1", 00:13:53.013 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:53.013 "strip_size_kb": 0, 00:13:53.013 "state": "online", 00:13:53.013 "raid_level": "raid1", 00:13:53.013 "superblock": true, 00:13:53.013 "num_base_bdevs": 4, 00:13:53.013 "num_base_bdevs_discovered": 2, 00:13:53.013 "num_base_bdevs_operational": 2, 00:13:53.013 "base_bdevs_list": [ 00:13:53.013 { 00:13:53.013 "name": null, 00:13:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.013 "is_configured": false, 00:13:53.013 "data_offset": 0, 00:13:53.013 "data_size": 
63488 00:13:53.013 }, 00:13:53.013 { 00:13:53.013 "name": null, 00:13:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.013 "is_configured": false, 00:13:53.013 "data_offset": 2048, 00:13:53.013 "data_size": 63488 00:13:53.013 }, 00:13:53.013 { 00:13:53.013 "name": "BaseBdev3", 00:13:53.013 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:53.013 "is_configured": true, 00:13:53.013 "data_offset": 2048, 00:13:53.013 "data_size": 63488 00:13:53.013 }, 00:13:53.013 { 00:13:53.013 "name": "BaseBdev4", 00:13:53.013 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:53.013 "is_configured": true, 00:13:53.013 "data_offset": 2048, 00:13:53.013 "data_size": 63488 00:13:53.013 } 00:13:53.013 ] 00:13:53.013 }' 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.013 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.273 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:53.273 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.273 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.273 [2024-11-28 02:29:26.897041] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:53.273 [2024-11-28 02:29:26.897165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.273 [2024-11-28 02:29:26.897216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:53.273 [2024-11-28 02:29:26.897281] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.273 [2024-11-28 02:29:26.897791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.273 [2024-11-28 02:29:26.897867] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:13:53.273 [2024-11-28 02:29:26.898017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:53.273 [2024-11-28 02:29:26.898072] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:53.273 [2024-11-28 02:29:26.898122] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:53.273 [2024-11-28 02:29:26.898183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.273 [2024-11-28 02:29:26.912666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:13:53.273 spare 00:13:53.273 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.273 [2024-11-28 02:29:26.914549] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.273 02:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.656 "name": "raid_bdev1", 00:13:54.656 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:54.656 "strip_size_kb": 0, 00:13:54.656 "state": "online", 00:13:54.656 "raid_level": "raid1", 00:13:54.656 "superblock": true, 00:13:54.656 "num_base_bdevs": 4, 00:13:54.656 "num_base_bdevs_discovered": 3, 00:13:54.656 "num_base_bdevs_operational": 3, 00:13:54.656 "process": { 00:13:54.656 "type": "rebuild", 00:13:54.656 "target": "spare", 00:13:54.656 "progress": { 00:13:54.656 "blocks": 20480, 00:13:54.656 "percent": 32 00:13:54.656 } 00:13:54.656 }, 00:13:54.656 "base_bdevs_list": [ 00:13:54.656 { 00:13:54.656 "name": "spare", 00:13:54.656 "uuid": "64a7b615-01d3-5db5-889a-826e6761fbc7", 00:13:54.656 "is_configured": true, 00:13:54.656 "data_offset": 2048, 00:13:54.656 "data_size": 63488 00:13:54.656 }, 00:13:54.656 { 00:13:54.656 "name": null, 00:13:54.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.656 "is_configured": false, 00:13:54.656 "data_offset": 2048, 00:13:54.656 "data_size": 63488 00:13:54.656 }, 00:13:54.656 { 00:13:54.656 "name": "BaseBdev3", 00:13:54.656 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:54.656 "is_configured": true, 00:13:54.656 "data_offset": 2048, 00:13:54.656 "data_size": 63488 00:13:54.656 }, 00:13:54.656 { 00:13:54.656 "name": "BaseBdev4", 00:13:54.656 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:54.656 "is_configured": true, 00:13:54.656 "data_offset": 2048, 00:13:54.656 "data_size": 63488 00:13:54.656 } 00:13:54.656 ] 00:13:54.656 }' 00:13:54.656 02:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.656 02:29:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.656 [2024-11-28 02:29:28.074159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.656 [2024-11-28 02:29:28.119594] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:54.656 [2024-11-28 02:29:28.119730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.656 [2024-11-28 02:29:28.119753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.656 [2024-11-28 02:29:28.119763] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.656 02:29:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.656 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.656 "name": "raid_bdev1", 00:13:54.656 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:54.656 "strip_size_kb": 0, 00:13:54.656 "state": "online", 00:13:54.656 "raid_level": "raid1", 00:13:54.656 "superblock": true, 00:13:54.656 "num_base_bdevs": 4, 00:13:54.656 "num_base_bdevs_discovered": 2, 00:13:54.656 "num_base_bdevs_operational": 2, 00:13:54.656 "base_bdevs_list": [ 00:13:54.656 { 00:13:54.656 "name": null, 00:13:54.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.656 "is_configured": false, 00:13:54.656 "data_offset": 0, 00:13:54.656 "data_size": 63488 00:13:54.656 }, 00:13:54.656 { 00:13:54.656 "name": null, 00:13:54.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.657 "is_configured": false, 00:13:54.657 "data_offset": 2048, 00:13:54.657 
"data_size": 63488 00:13:54.657 }, 00:13:54.657 { 00:13:54.657 "name": "BaseBdev3", 00:13:54.657 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:54.657 "is_configured": true, 00:13:54.657 "data_offset": 2048, 00:13:54.657 "data_size": 63488 00:13:54.657 }, 00:13:54.657 { 00:13:54.657 "name": "BaseBdev4", 00:13:54.657 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:54.657 "is_configured": true, 00:13:54.657 "data_offset": 2048, 00:13:54.657 "data_size": 63488 00:13:54.657 } 00:13:54.657 ] 00:13:54.657 }' 00:13:54.657 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.657 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.917 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.917 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.917 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.917 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.917 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.917 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.917 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.917 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.917 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.177 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.177 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.177 "name": "raid_bdev1", 
00:13:55.177 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:55.177 "strip_size_kb": 0, 00:13:55.177 "state": "online", 00:13:55.177 "raid_level": "raid1", 00:13:55.177 "superblock": true, 00:13:55.177 "num_base_bdevs": 4, 00:13:55.177 "num_base_bdevs_discovered": 2, 00:13:55.177 "num_base_bdevs_operational": 2, 00:13:55.177 "base_bdevs_list": [ 00:13:55.177 { 00:13:55.177 "name": null, 00:13:55.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.177 "is_configured": false, 00:13:55.177 "data_offset": 0, 00:13:55.177 "data_size": 63488 00:13:55.177 }, 00:13:55.177 { 00:13:55.177 "name": null, 00:13:55.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.177 "is_configured": false, 00:13:55.177 "data_offset": 2048, 00:13:55.177 "data_size": 63488 00:13:55.177 }, 00:13:55.177 { 00:13:55.177 "name": "BaseBdev3", 00:13:55.177 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:55.177 "is_configured": true, 00:13:55.177 "data_offset": 2048, 00:13:55.177 "data_size": 63488 00:13:55.177 }, 00:13:55.177 { 00:13:55.177 "name": "BaseBdev4", 00:13:55.177 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:55.177 "is_configured": true, 00:13:55.177 "data_offset": 2048, 00:13:55.177 "data_size": 63488 00:13:55.178 } 00:13:55.178 ] 00:13:55.178 }' 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.178 02:29:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.178 [2024-11-28 02:29:28.755355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:55.178 [2024-11-28 02:29:28.755418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.178 [2024-11-28 02:29:28.755442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:55.178 [2024-11-28 02:29:28.755453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.178 [2024-11-28 02:29:28.755956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.178 [2024-11-28 02:29:28.755978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.178 [2024-11-28 02:29:28.756067] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:55.178 [2024-11-28 02:29:28.756081] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:55.178 [2024-11-28 02:29:28.756098] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:55.178 [2024-11-28 02:29:28.756109] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:55.178 BaseBdev1 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:55.178 02:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.124 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.125 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.125 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.404 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.404 "name": "raid_bdev1", 00:13:56.404 "uuid": 
"274f1b44-7149-4e55-a65c-c136355557b9", 00:13:56.404 "strip_size_kb": 0, 00:13:56.404 "state": "online", 00:13:56.404 "raid_level": "raid1", 00:13:56.404 "superblock": true, 00:13:56.404 "num_base_bdevs": 4, 00:13:56.404 "num_base_bdevs_discovered": 2, 00:13:56.404 "num_base_bdevs_operational": 2, 00:13:56.404 "base_bdevs_list": [ 00:13:56.404 { 00:13:56.404 "name": null, 00:13:56.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.404 "is_configured": false, 00:13:56.404 "data_offset": 0, 00:13:56.404 "data_size": 63488 00:13:56.404 }, 00:13:56.404 { 00:13:56.404 "name": null, 00:13:56.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.404 "is_configured": false, 00:13:56.404 "data_offset": 2048, 00:13:56.404 "data_size": 63488 00:13:56.404 }, 00:13:56.404 { 00:13:56.404 "name": "BaseBdev3", 00:13:56.404 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:56.404 "is_configured": true, 00:13:56.404 "data_offset": 2048, 00:13:56.404 "data_size": 63488 00:13:56.404 }, 00:13:56.404 { 00:13:56.404 "name": "BaseBdev4", 00:13:56.404 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:56.404 "is_configured": true, 00:13:56.404 "data_offset": 2048, 00:13:56.404 "data_size": 63488 00:13:56.404 } 00:13:56.404 ] 00:13:56.404 }' 00:13:56.404 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.404 02:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.678 "name": "raid_bdev1", 00:13:56.678 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:56.678 "strip_size_kb": 0, 00:13:56.678 "state": "online", 00:13:56.678 "raid_level": "raid1", 00:13:56.678 "superblock": true, 00:13:56.678 "num_base_bdevs": 4, 00:13:56.678 "num_base_bdevs_discovered": 2, 00:13:56.678 "num_base_bdevs_operational": 2, 00:13:56.678 "base_bdevs_list": [ 00:13:56.678 { 00:13:56.678 "name": null, 00:13:56.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.678 "is_configured": false, 00:13:56.678 "data_offset": 0, 00:13:56.678 "data_size": 63488 00:13:56.678 }, 00:13:56.678 { 00:13:56.678 "name": null, 00:13:56.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.678 "is_configured": false, 00:13:56.678 "data_offset": 2048, 00:13:56.678 "data_size": 63488 00:13:56.678 }, 00:13:56.678 { 00:13:56.678 "name": "BaseBdev3", 00:13:56.678 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:56.678 "is_configured": true, 00:13:56.678 "data_offset": 2048, 00:13:56.678 "data_size": 63488 00:13:56.678 }, 00:13:56.678 { 00:13:56.678 "name": "BaseBdev4", 00:13:56.678 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:56.678 "is_configured": true, 00:13:56.678 "data_offset": 2048, 00:13:56.678 "data_size": 63488 00:13:56.678 
} 00:13:56.678 ] 00:13:56.678 }' 00:13:56.678 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.679 [2024-11-28 02:29:30.281063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.679 [2024-11-28 02:29:30.281235] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:13:56.679 [2024-11-28 02:29:30.281252] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:56.679 request: 00:13:56.679 { 00:13:56.679 "base_bdev": "BaseBdev1", 00:13:56.679 "raid_bdev": "raid_bdev1", 00:13:56.679 "method": "bdev_raid_add_base_bdev", 00:13:56.679 "req_id": 1 00:13:56.679 } 00:13:56.679 Got JSON-RPC error response 00:13:56.679 response: 00:13:56.679 { 00:13:56.679 "code": -22, 00:13:56.679 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:56.679 } 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:56.679 02:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:57.617 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.877 "name": "raid_bdev1", 00:13:57.877 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:57.877 "strip_size_kb": 0, 00:13:57.877 "state": "online", 00:13:57.877 "raid_level": "raid1", 00:13:57.877 "superblock": true, 00:13:57.877 "num_base_bdevs": 4, 00:13:57.877 "num_base_bdevs_discovered": 2, 00:13:57.877 "num_base_bdevs_operational": 2, 00:13:57.877 "base_bdevs_list": [ 00:13:57.877 { 00:13:57.877 "name": null, 00:13:57.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.877 "is_configured": false, 00:13:57.877 "data_offset": 0, 00:13:57.877 "data_size": 63488 00:13:57.877 }, 00:13:57.877 { 00:13:57.877 "name": null, 00:13:57.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.877 "is_configured": false, 00:13:57.877 "data_offset": 2048, 00:13:57.877 "data_size": 63488 00:13:57.877 }, 00:13:57.877 { 00:13:57.877 "name": "BaseBdev3", 00:13:57.877 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:57.877 "is_configured": true, 00:13:57.877 
"data_offset": 2048, 00:13:57.877 "data_size": 63488 00:13:57.877 }, 00:13:57.877 { 00:13:57.877 "name": "BaseBdev4", 00:13:57.877 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:57.877 "is_configured": true, 00:13:57.877 "data_offset": 2048, 00:13:57.877 "data_size": 63488 00:13:57.877 } 00:13:57.877 ] 00:13:57.877 }' 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.877 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.138 "name": "raid_bdev1", 00:13:58.138 "uuid": "274f1b44-7149-4e55-a65c-c136355557b9", 00:13:58.138 "strip_size_kb": 0, 00:13:58.138 "state": "online", 00:13:58.138 "raid_level": "raid1", 00:13:58.138 "superblock": true, 
00:13:58.138 "num_base_bdevs": 4, 00:13:58.138 "num_base_bdevs_discovered": 2, 00:13:58.138 "num_base_bdevs_operational": 2, 00:13:58.138 "base_bdevs_list": [ 00:13:58.138 { 00:13:58.138 "name": null, 00:13:58.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.138 "is_configured": false, 00:13:58.138 "data_offset": 0, 00:13:58.138 "data_size": 63488 00:13:58.138 }, 00:13:58.138 { 00:13:58.138 "name": null, 00:13:58.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.138 "is_configured": false, 00:13:58.138 "data_offset": 2048, 00:13:58.138 "data_size": 63488 00:13:58.138 }, 00:13:58.138 { 00:13:58.138 "name": "BaseBdev3", 00:13:58.138 "uuid": "d79a24fe-dd64-5cbe-95c2-a4ae6e921c0d", 00:13:58.138 "is_configured": true, 00:13:58.138 "data_offset": 2048, 00:13:58.138 "data_size": 63488 00:13:58.138 }, 00:13:58.138 { 00:13:58.138 "name": "BaseBdev4", 00:13:58.138 "uuid": "d978f73c-6f0f-50d4-a80f-2f354205b77c", 00:13:58.138 "is_configured": true, 00:13:58.138 "data_offset": 2048, 00:13:58.138 "data_size": 63488 00:13:58.138 } 00:13:58.138 ] 00:13:58.138 }' 00:13:58.138 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.398 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.398 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78913 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78913 ']' 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78913 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:58.399 02:29:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78913 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.399 killing process with pid 78913 00:13:58.399 Received shutdown signal, test time was about 18.880949 seconds 00:13:58.399 00:13:58.399 Latency(us) 00:13:58.399 [2024-11-28T02:29:32.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.399 [2024-11-28T02:29:32.078Z] =================================================================================================================== 00:13:58.399 [2024-11-28T02:29:32.078Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78913' 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78913 00:13:58.399 [2024-11-28 02:29:31.916884] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.399 [2024-11-28 02:29:31.917036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.399 02:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78913 00:13:58.399 [2024-11-28 02:29:31.917109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.399 [2024-11-28 02:29:31.917122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:58.659 [2024-11-28 02:29:32.313055] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:00.041 02:29:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:14:00.041 00:14:00.041 real 0m22.299s 00:14:00.041 user 0m28.832s 00:14:00.041 sys 0m2.618s 00:14:00.041 ************************************ 00:14:00.041 END TEST raid_rebuild_test_sb_io 00:14:00.041 ************************************ 00:14:00.041 02:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.041 02:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.041 02:29:33 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:00.041 02:29:33 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:00.041 02:29:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:00.041 02:29:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.041 02:29:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:00.041 ************************************ 00:14:00.041 START TEST raid5f_state_function_test 00:14:00.042 ************************************ 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79654 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:00.042 Process raid pid: 79654 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79654' 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79654 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79654 ']' 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.042 02:29:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.042 [2024-11-28 02:29:33.605692] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:14:00.042 [2024-11-28 02:29:33.605893] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.302 [2024-11-28 02:29:33.779315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.302 [2024-11-28 02:29:33.889018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.562 [2024-11-28 02:29:34.087452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.562 [2024-11-28 02:29:34.087567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.822 [2024-11-28 02:29:34.429255] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.822 [2024-11-28 02:29:34.429319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.822 [2024-11-28 02:29:34.429331] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.822 [2024-11-28 02:29:34.429342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.822 [2024-11-28 02:29:34.429350] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:00.822 [2024-11-28 02:29:34.429361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.822 "name": "Existed_Raid", 00:14:00.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.822 "strip_size_kb": 64, 00:14:00.822 "state": "configuring", 00:14:00.822 "raid_level": "raid5f", 00:14:00.822 "superblock": false, 00:14:00.822 "num_base_bdevs": 3, 00:14:00.822 "num_base_bdevs_discovered": 0, 00:14:00.822 "num_base_bdevs_operational": 3, 00:14:00.822 "base_bdevs_list": [ 00:14:00.822 { 00:14:00.822 "name": "BaseBdev1", 00:14:00.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.822 "is_configured": false, 00:14:00.822 "data_offset": 0, 00:14:00.822 "data_size": 0 00:14:00.822 }, 00:14:00.822 { 00:14:00.822 "name": "BaseBdev2", 00:14:00.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.822 "is_configured": false, 00:14:00.822 "data_offset": 0, 00:14:00.822 "data_size": 0 00:14:00.822 }, 00:14:00.822 { 00:14:00.822 "name": "BaseBdev3", 00:14:00.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.822 "is_configured": false, 00:14:00.822 "data_offset": 0, 00:14:00.822 "data_size": 0 00:14:00.822 } 00:14:00.822 ] 00:14:00.822 }' 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.822 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.393 [2024-11-28 02:29:34.848502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.393 [2024-11-28 02:29:34.848591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.393 [2024-11-28 02:29:34.860486] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:01.393 [2024-11-28 02:29:34.860592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:01.393 [2024-11-28 02:29:34.860627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.393 [2024-11-28 02:29:34.860656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.393 [2024-11-28 02:29:34.860696] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.393 [2024-11-28 02:29:34.860740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.393 [2024-11-28 02:29:34.902224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.393 BaseBdev1 00:14:01.393 02:29:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.393 [ 00:14:01.393 { 00:14:01.393 "name": "BaseBdev1", 00:14:01.393 "aliases": [ 00:14:01.393 "2e5ca132-ae8c-41b1-9360-6a739979208a" 00:14:01.393 ], 00:14:01.393 "product_name": "Malloc disk", 00:14:01.393 "block_size": 512, 00:14:01.393 "num_blocks": 65536, 00:14:01.393 "uuid": "2e5ca132-ae8c-41b1-9360-6a739979208a", 00:14:01.393 "assigned_rate_limits": { 00:14:01.393 "rw_ios_per_sec": 0, 00:14:01.393 
"rw_mbytes_per_sec": 0, 00:14:01.393 "r_mbytes_per_sec": 0, 00:14:01.393 "w_mbytes_per_sec": 0 00:14:01.393 }, 00:14:01.393 "claimed": true, 00:14:01.393 "claim_type": "exclusive_write", 00:14:01.393 "zoned": false, 00:14:01.393 "supported_io_types": { 00:14:01.393 "read": true, 00:14:01.393 "write": true, 00:14:01.393 "unmap": true, 00:14:01.393 "flush": true, 00:14:01.393 "reset": true, 00:14:01.393 "nvme_admin": false, 00:14:01.393 "nvme_io": false, 00:14:01.393 "nvme_io_md": false, 00:14:01.393 "write_zeroes": true, 00:14:01.393 "zcopy": true, 00:14:01.393 "get_zone_info": false, 00:14:01.393 "zone_management": false, 00:14:01.393 "zone_append": false, 00:14:01.393 "compare": false, 00:14:01.393 "compare_and_write": false, 00:14:01.393 "abort": true, 00:14:01.393 "seek_hole": false, 00:14:01.393 "seek_data": false, 00:14:01.393 "copy": true, 00:14:01.393 "nvme_iov_md": false 00:14:01.393 }, 00:14:01.393 "memory_domains": [ 00:14:01.393 { 00:14:01.393 "dma_device_id": "system", 00:14:01.393 "dma_device_type": 1 00:14:01.393 }, 00:14:01.393 { 00:14:01.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.393 "dma_device_type": 2 00:14:01.393 } 00:14:01.393 ], 00:14:01.393 "driver_specific": {} 00:14:01.393 } 00:14:01.393 ] 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.393 02:29:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.393 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.393 "name": "Existed_Raid", 00:14:01.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.393 "strip_size_kb": 64, 00:14:01.393 "state": "configuring", 00:14:01.393 "raid_level": "raid5f", 00:14:01.393 "superblock": false, 00:14:01.393 "num_base_bdevs": 3, 00:14:01.393 "num_base_bdevs_discovered": 1, 00:14:01.393 "num_base_bdevs_operational": 3, 00:14:01.393 "base_bdevs_list": [ 00:14:01.393 { 00:14:01.393 "name": "BaseBdev1", 00:14:01.393 "uuid": "2e5ca132-ae8c-41b1-9360-6a739979208a", 00:14:01.393 "is_configured": true, 00:14:01.393 "data_offset": 0, 00:14:01.393 "data_size": 65536 00:14:01.393 }, 00:14:01.393 { 00:14:01.393 "name": 
"BaseBdev2", 00:14:01.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.393 "is_configured": false, 00:14:01.393 "data_offset": 0, 00:14:01.393 "data_size": 0 00:14:01.393 }, 00:14:01.393 { 00:14:01.393 "name": "BaseBdev3", 00:14:01.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.394 "is_configured": false, 00:14:01.394 "data_offset": 0, 00:14:01.394 "data_size": 0 00:14:01.394 } 00:14:01.394 ] 00:14:01.394 }' 00:14:01.394 02:29:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.394 02:29:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.962 [2024-11-28 02:29:35.345516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.962 [2024-11-28 02:29:35.345572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.962 [2024-11-28 02:29:35.357545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.962 [2024-11-28 02:29:35.359313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:01.962 [2024-11-28 02:29:35.359354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.962 [2024-11-28 02:29:35.359365] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.962 [2024-11-28 02:29:35.359376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.962 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.963 "name": "Existed_Raid", 00:14:01.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.963 "strip_size_kb": 64, 00:14:01.963 "state": "configuring", 00:14:01.963 "raid_level": "raid5f", 00:14:01.963 "superblock": false, 00:14:01.963 "num_base_bdevs": 3, 00:14:01.963 "num_base_bdevs_discovered": 1, 00:14:01.963 "num_base_bdevs_operational": 3, 00:14:01.963 "base_bdevs_list": [ 00:14:01.963 { 00:14:01.963 "name": "BaseBdev1", 00:14:01.963 "uuid": "2e5ca132-ae8c-41b1-9360-6a739979208a", 00:14:01.963 "is_configured": true, 00:14:01.963 "data_offset": 0, 00:14:01.963 "data_size": 65536 00:14:01.963 }, 00:14:01.963 { 00:14:01.963 "name": "BaseBdev2", 00:14:01.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.963 "is_configured": false, 00:14:01.963 "data_offset": 0, 00:14:01.963 "data_size": 0 00:14:01.963 }, 00:14:01.963 { 00:14:01.963 "name": "BaseBdev3", 00:14:01.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.963 "is_configured": false, 00:14:01.963 "data_offset": 0, 00:14:01.963 "data_size": 0 00:14:01.963 } 00:14:01.963 ] 00:14:01.963 }' 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.963 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.222 02:29:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.223 [2024-11-28 02:29:35.866152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.223 BaseBdev2 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.223 02:29:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.223 [ 00:14:02.223 { 00:14:02.223 "name": "BaseBdev2", 00:14:02.223 "aliases": [ 00:14:02.223 "a1cf02c0-8d51-44a7-8a79-efa71a492cb0" 00:14:02.223 ], 00:14:02.223 "product_name": "Malloc disk", 00:14:02.223 "block_size": 512, 00:14:02.223 "num_blocks": 65536, 00:14:02.223 "uuid": "a1cf02c0-8d51-44a7-8a79-efa71a492cb0", 00:14:02.223 "assigned_rate_limits": { 00:14:02.223 "rw_ios_per_sec": 0, 00:14:02.223 "rw_mbytes_per_sec": 0, 00:14:02.223 "r_mbytes_per_sec": 0, 00:14:02.223 "w_mbytes_per_sec": 0 00:14:02.223 }, 00:14:02.223 "claimed": true, 00:14:02.223 "claim_type": "exclusive_write", 00:14:02.223 "zoned": false, 00:14:02.223 "supported_io_types": { 00:14:02.223 "read": true, 00:14:02.223 "write": true, 00:14:02.223 "unmap": true, 00:14:02.223 "flush": true, 00:14:02.223 "reset": true, 00:14:02.223 "nvme_admin": false, 00:14:02.483 "nvme_io": false, 00:14:02.483 "nvme_io_md": false, 00:14:02.483 "write_zeroes": true, 00:14:02.483 "zcopy": true, 00:14:02.483 "get_zone_info": false, 00:14:02.483 "zone_management": false, 00:14:02.483 "zone_append": false, 00:14:02.483 "compare": false, 00:14:02.483 "compare_and_write": false, 00:14:02.483 "abort": true, 00:14:02.483 "seek_hole": false, 00:14:02.483 "seek_data": false, 00:14:02.483 "copy": true, 00:14:02.483 "nvme_iov_md": false 00:14:02.483 }, 00:14:02.483 "memory_domains": [ 00:14:02.483 { 00:14:02.483 "dma_device_id": "system", 00:14:02.483 "dma_device_type": 1 00:14:02.483 }, 00:14:02.483 { 00:14:02.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.483 "dma_device_type": 2 00:14:02.483 } 00:14:02.483 ], 00:14:02.483 "driver_specific": {} 00:14:02.483 } 00:14:02.483 ] 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.483 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:02.483 "name": "Existed_Raid", 00:14:02.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.483 "strip_size_kb": 64, 00:14:02.483 "state": "configuring", 00:14:02.483 "raid_level": "raid5f", 00:14:02.483 "superblock": false, 00:14:02.483 "num_base_bdevs": 3, 00:14:02.483 "num_base_bdevs_discovered": 2, 00:14:02.483 "num_base_bdevs_operational": 3, 00:14:02.483 "base_bdevs_list": [ 00:14:02.483 { 00:14:02.483 "name": "BaseBdev1", 00:14:02.483 "uuid": "2e5ca132-ae8c-41b1-9360-6a739979208a", 00:14:02.483 "is_configured": true, 00:14:02.483 "data_offset": 0, 00:14:02.483 "data_size": 65536 00:14:02.483 }, 00:14:02.483 { 00:14:02.483 "name": "BaseBdev2", 00:14:02.483 "uuid": "a1cf02c0-8d51-44a7-8a79-efa71a492cb0", 00:14:02.483 "is_configured": true, 00:14:02.483 "data_offset": 0, 00:14:02.483 "data_size": 65536 00:14:02.484 }, 00:14:02.484 { 00:14:02.484 "name": "BaseBdev3", 00:14:02.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.484 "is_configured": false, 00:14:02.484 "data_offset": 0, 00:14:02.484 "data_size": 0 00:14:02.484 } 00:14:02.484 ] 00:14:02.484 }' 00:14:02.484 02:29:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.484 02:29:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.744 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:02.744 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.744 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.744 [2024-11-28 02:29:36.421318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.744 [2024-11-28 02:29:36.421402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:02.744 [2024-11-28 02:29:36.421420] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:02.744 [2024-11-28 02:29:36.421690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:03.004 [2024-11-28 02:29:36.427170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:03.004 [2024-11-28 02:29:36.427195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:03.004 [2024-11-28 02:29:36.427514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.004 BaseBdev3 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.004 [ 00:14:03.004 { 00:14:03.004 "name": "BaseBdev3", 00:14:03.004 "aliases": [ 00:14:03.004 "61f9f3a9-9cea-4489-b380-60801ea7da34" 00:14:03.004 ], 00:14:03.004 "product_name": "Malloc disk", 00:14:03.004 "block_size": 512, 00:14:03.004 "num_blocks": 65536, 00:14:03.004 "uuid": "61f9f3a9-9cea-4489-b380-60801ea7da34", 00:14:03.004 "assigned_rate_limits": { 00:14:03.004 "rw_ios_per_sec": 0, 00:14:03.004 "rw_mbytes_per_sec": 0, 00:14:03.004 "r_mbytes_per_sec": 0, 00:14:03.004 "w_mbytes_per_sec": 0 00:14:03.004 }, 00:14:03.004 "claimed": true, 00:14:03.004 "claim_type": "exclusive_write", 00:14:03.004 "zoned": false, 00:14:03.004 "supported_io_types": { 00:14:03.004 "read": true, 00:14:03.004 "write": true, 00:14:03.004 "unmap": true, 00:14:03.004 "flush": true, 00:14:03.004 "reset": true, 00:14:03.004 "nvme_admin": false, 00:14:03.004 "nvme_io": false, 00:14:03.004 "nvme_io_md": false, 00:14:03.004 "write_zeroes": true, 00:14:03.004 "zcopy": true, 00:14:03.004 "get_zone_info": false, 00:14:03.004 "zone_management": false, 00:14:03.004 "zone_append": false, 00:14:03.004 "compare": false, 00:14:03.004 "compare_and_write": false, 00:14:03.004 "abort": true, 00:14:03.004 "seek_hole": false, 00:14:03.004 "seek_data": false, 00:14:03.004 "copy": true, 00:14:03.004 "nvme_iov_md": false 00:14:03.004 }, 00:14:03.004 "memory_domains": [ 00:14:03.004 { 00:14:03.004 "dma_device_id": "system", 00:14:03.004 "dma_device_type": 1 00:14:03.004 }, 00:14:03.004 { 00:14:03.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.004 "dma_device_type": 2 00:14:03.004 } 00:14:03.004 ], 00:14:03.004 "driver_specific": {} 00:14:03.004 } 00:14:03.004 ] 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.004 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.005 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.005 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.005 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.005 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.005 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.005 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.005 02:29:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.005 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.005 "name": "Existed_Raid", 00:14:03.005 "uuid": "07477db3-5c14-41ac-a9ea-ede673cf0211", 00:14:03.005 "strip_size_kb": 64, 00:14:03.005 "state": "online", 00:14:03.005 "raid_level": "raid5f", 00:14:03.005 "superblock": false, 00:14:03.005 "num_base_bdevs": 3, 00:14:03.005 "num_base_bdevs_discovered": 3, 00:14:03.005 "num_base_bdevs_operational": 3, 00:14:03.005 "base_bdevs_list": [ 00:14:03.005 { 00:14:03.005 "name": "BaseBdev1", 00:14:03.005 "uuid": "2e5ca132-ae8c-41b1-9360-6a739979208a", 00:14:03.005 "is_configured": true, 00:14:03.005 "data_offset": 0, 00:14:03.005 "data_size": 65536 00:14:03.005 }, 00:14:03.005 { 00:14:03.005 "name": "BaseBdev2", 00:14:03.005 "uuid": "a1cf02c0-8d51-44a7-8a79-efa71a492cb0", 00:14:03.005 "is_configured": true, 00:14:03.005 "data_offset": 0, 00:14:03.005 "data_size": 65536 00:14:03.005 }, 00:14:03.005 { 00:14:03.005 "name": "BaseBdev3", 00:14:03.005 "uuid": "61f9f3a9-9cea-4489-b380-60801ea7da34", 00:14:03.005 "is_configured": true, 00:14:03.005 "data_offset": 0, 00:14:03.005 "data_size": 65536 00:14:03.005 } 00:14:03.005 ] 00:14:03.005 }' 00:14:03.005 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.005 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.265 02:29:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.265 [2024-11-28 02:29:36.916881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.265 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.265 "name": "Existed_Raid", 00:14:03.265 "aliases": [ 00:14:03.265 "07477db3-5c14-41ac-a9ea-ede673cf0211" 00:14:03.265 ], 00:14:03.265 "product_name": "Raid Volume", 00:14:03.265 "block_size": 512, 00:14:03.265 "num_blocks": 131072, 00:14:03.265 "uuid": "07477db3-5c14-41ac-a9ea-ede673cf0211", 00:14:03.265 "assigned_rate_limits": { 00:14:03.265 "rw_ios_per_sec": 0, 00:14:03.265 "rw_mbytes_per_sec": 0, 00:14:03.265 "r_mbytes_per_sec": 0, 00:14:03.265 "w_mbytes_per_sec": 0 00:14:03.265 }, 00:14:03.265 "claimed": false, 00:14:03.265 "zoned": false, 00:14:03.265 "supported_io_types": { 00:14:03.265 "read": true, 00:14:03.265 "write": true, 00:14:03.265 "unmap": false, 00:14:03.265 "flush": false, 00:14:03.265 "reset": true, 00:14:03.265 "nvme_admin": false, 00:14:03.265 "nvme_io": false, 00:14:03.265 "nvme_io_md": false, 00:14:03.265 "write_zeroes": true, 00:14:03.265 "zcopy": false, 00:14:03.265 "get_zone_info": false, 00:14:03.265 "zone_management": false, 00:14:03.265 "zone_append": false, 
00:14:03.265 "compare": false, 00:14:03.265 "compare_and_write": false, 00:14:03.265 "abort": false, 00:14:03.265 "seek_hole": false, 00:14:03.265 "seek_data": false, 00:14:03.265 "copy": false, 00:14:03.265 "nvme_iov_md": false 00:14:03.265 }, 00:14:03.265 "driver_specific": { 00:14:03.265 "raid": { 00:14:03.265 "uuid": "07477db3-5c14-41ac-a9ea-ede673cf0211", 00:14:03.265 "strip_size_kb": 64, 00:14:03.265 "state": "online", 00:14:03.265 "raid_level": "raid5f", 00:14:03.265 "superblock": false, 00:14:03.265 "num_base_bdevs": 3, 00:14:03.265 "num_base_bdevs_discovered": 3, 00:14:03.265 "num_base_bdevs_operational": 3, 00:14:03.265 "base_bdevs_list": [ 00:14:03.265 { 00:14:03.265 "name": "BaseBdev1", 00:14:03.265 "uuid": "2e5ca132-ae8c-41b1-9360-6a739979208a", 00:14:03.265 "is_configured": true, 00:14:03.265 "data_offset": 0, 00:14:03.265 "data_size": 65536 00:14:03.265 }, 00:14:03.265 { 00:14:03.265 "name": "BaseBdev2", 00:14:03.265 "uuid": "a1cf02c0-8d51-44a7-8a79-efa71a492cb0", 00:14:03.265 "is_configured": true, 00:14:03.265 "data_offset": 0, 00:14:03.265 "data_size": 65536 00:14:03.265 }, 00:14:03.265 { 00:14:03.265 "name": "BaseBdev3", 00:14:03.265 "uuid": "61f9f3a9-9cea-4489-b380-60801ea7da34", 00:14:03.265 "is_configured": true, 00:14:03.266 "data_offset": 0, 00:14:03.266 "data_size": 65536 00:14:03.266 } 00:14:03.266 ] 00:14:03.266 } 00:14:03.266 } 00:14:03.266 }' 00:14:03.266 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:03.526 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:03.526 BaseBdev2 00:14:03.526 BaseBdev3' 00:14:03.526 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.526 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.526 [2024-11-28 02:29:37.184252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:03.786 
02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.786 "name": "Existed_Raid", 00:14:03.786 "uuid": "07477db3-5c14-41ac-a9ea-ede673cf0211", 00:14:03.786 "strip_size_kb": 64, 00:14:03.786 "state": 
"online", 00:14:03.786 "raid_level": "raid5f", 00:14:03.786 "superblock": false, 00:14:03.786 "num_base_bdevs": 3, 00:14:03.786 "num_base_bdevs_discovered": 2, 00:14:03.786 "num_base_bdevs_operational": 2, 00:14:03.786 "base_bdevs_list": [ 00:14:03.786 { 00:14:03.786 "name": null, 00:14:03.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.786 "is_configured": false, 00:14:03.786 "data_offset": 0, 00:14:03.786 "data_size": 65536 00:14:03.786 }, 00:14:03.786 { 00:14:03.786 "name": "BaseBdev2", 00:14:03.786 "uuid": "a1cf02c0-8d51-44a7-8a79-efa71a492cb0", 00:14:03.786 "is_configured": true, 00:14:03.786 "data_offset": 0, 00:14:03.786 "data_size": 65536 00:14:03.786 }, 00:14:03.786 { 00:14:03.786 "name": "BaseBdev3", 00:14:03.786 "uuid": "61f9f3a9-9cea-4489-b380-60801ea7da34", 00:14:03.786 "is_configured": true, 00:14:03.786 "data_offset": 0, 00:14:03.786 "data_size": 65536 00:14:03.786 } 00:14:03.786 ] 00:14:03.786 }' 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.786 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.046 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:04.046 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:04.046 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.046 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:04.046 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.046 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.046 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.306 [2024-11-28 02:29:37.739837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:04.306 [2024-11-28 02:29:37.740016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.306 [2024-11-28 02:29:37.844012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.306 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.306 [2024-11-28 02:29:37.883985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:04.306 [2024-11-28 02:29:37.884052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:04.567 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.567 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:04.567 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:04.567 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:04.567 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.567 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.567 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.567 BaseBdev2 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:04.567 [
00:14:04.567 {
00:14:04.567 "name": "BaseBdev2",
00:14:04.567 "aliases": [
00:14:04.567 "c28298ac-5a27-451d-a298-5f7b011d4613"
00:14:04.567 ],
00:14:04.567 "product_name": "Malloc disk",
00:14:04.567 "block_size": 512,
00:14:04.567 "num_blocks": 65536,
00:14:04.567 "uuid": "c28298ac-5a27-451d-a298-5f7b011d4613",
00:14:04.567 "assigned_rate_limits": {
00:14:04.567 "rw_ios_per_sec": 0,
00:14:04.567 "rw_mbytes_per_sec": 0,
00:14:04.567 "r_mbytes_per_sec": 0,
00:14:04.567 "w_mbytes_per_sec": 0
00:14:04.567 },
00:14:04.567 "claimed": false,
00:14:04.567 "zoned": false,
00:14:04.567 "supported_io_types": {
00:14:04.567 "read": true,
00:14:04.567 "write": true,
00:14:04.567 "unmap": true,
00:14:04.567 "flush": true,
00:14:04.567 "reset": true,
00:14:04.567 "nvme_admin": false,
00:14:04.567 "nvme_io": false,
00:14:04.567 "nvme_io_md": false,
00:14:04.567 "write_zeroes": true,
00:14:04.567 "zcopy": true,
00:14:04.567 "get_zone_info": false,
00:14:04.567 "zone_management": false,
00:14:04.567 "zone_append": false,
00:14:04.567 "compare": false,
00:14:04.567 "compare_and_write": false,
00:14:04.567 "abort": true,
00:14:04.567 "seek_hole": false,
00:14:04.567 "seek_data": false,
00:14:04.567 "copy": true,
00:14:04.567 "nvme_iov_md": false
00:14:04.567 },
00:14:04.567 "memory_domains": [
00:14:04.567 {
00:14:04.567 "dma_device_id": "system",
00:14:04.567 "dma_device_type": 1
00:14:04.567 },
00:14:04.567 {
00:14:04.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:04.567 "dma_device_type": 2
00:14:04.567 }
00:14:04.567 ],
00:14:04.567 "driver_specific": {}
00:14:04.567 }
00:14:04.567 ]
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.567 BaseBdev3
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.567 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.568 [
00:14:04.568 {
00:14:04.568 "name": "BaseBdev3",
00:14:04.568 "aliases": [
00:14:04.568 "9e623c16-07a4-4ecc-8e33-3389955256bd"
00:14:04.568 ],
00:14:04.568 "product_name": "Malloc disk",
00:14:04.568 "block_size": 512,
00:14:04.568 "num_blocks": 65536,
00:14:04.568 "uuid": "9e623c16-07a4-4ecc-8e33-3389955256bd",
00:14:04.568 "assigned_rate_limits": {
00:14:04.568 "rw_ios_per_sec": 0,
00:14:04.568 "rw_mbytes_per_sec": 0,
00:14:04.568 "r_mbytes_per_sec": 0,
00:14:04.568 "w_mbytes_per_sec": 0
00:14:04.568 },
00:14:04.568 "claimed": false,
00:14:04.568 "zoned": false,
00:14:04.568 "supported_io_types": {
00:14:04.568 "read": true,
00:14:04.568 "write": true,
00:14:04.568 "unmap": true,
00:14:04.568 "flush": true,
00:14:04.568 "reset": true,
00:14:04.568 "nvme_admin": false,
00:14:04.568 "nvme_io": false,
00:14:04.568 "nvme_io_md": false,
00:14:04.568 "write_zeroes": true,
00:14:04.568 "zcopy": true,
00:14:04.568 "get_zone_info": false,
00:14:04.568 "zone_management": false,
00:14:04.568 "zone_append": false,
00:14:04.568 "compare": false,
00:14:04.568 "compare_and_write": false,
00:14:04.568 "abort": true,
00:14:04.568 "seek_hole": false,
00:14:04.568 "seek_data": false,
00:14:04.568 "copy": true,
00:14:04.568 "nvme_iov_md": false
00:14:04.568 },
00:14:04.568 "memory_domains": [
00:14:04.568 {
00:14:04.568 "dma_device_id": "system",
00:14:04.568 "dma_device_type": 1
00:14:04.568 },
00:14:04.568 {
00:14:04.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:04.568 "dma_device_type": 2
00:14:04.568 }
00:14:04.568 ],
00:14:04.568 "driver_specific": {}
00:14:04.568 }
00:14:04.568 ]
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.568 [2024-11-28 02:29:38.216622] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:04.568 [2024-11-28 02:29:38.216772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:04.568 [2024-11-28 02:29:38.216832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:04.568 [2024-11-28 02:29:38.219023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.568 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.828 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.828 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:04.828 "name": "Existed_Raid",
00:14:04.828 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:04.828 "strip_size_kb": 64,
00:14:04.828 "state": "configuring",
00:14:04.828 "raid_level": "raid5f",
00:14:04.828 "superblock": false,
00:14:04.828 "num_base_bdevs": 3,
00:14:04.828 "num_base_bdevs_discovered": 2,
00:14:04.828 "num_base_bdevs_operational": 3,
00:14:04.828 "base_bdevs_list": [
00:14:04.828 {
00:14:04.828 "name": "BaseBdev1",
00:14:04.828 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:04.828 "is_configured": false,
00:14:04.828 "data_offset": 0,
00:14:04.828 "data_size": 0
00:14:04.828 },
00:14:04.828 {
00:14:04.828 "name": "BaseBdev2",
00:14:04.828 "uuid": "c28298ac-5a27-451d-a298-5f7b011d4613",
00:14:04.828 "is_configured": true,
00:14:04.828 "data_offset": 0,
00:14:04.828 "data_size": 65536
00:14:04.828 },
00:14:04.828 {
00:14:04.828 "name": "BaseBdev3",
00:14:04.828 "uuid": "9e623c16-07a4-4ecc-8e33-3389955256bd",
00:14:04.828 "is_configured": true,
00:14:04.828 "data_offset": 0,
00:14:04.828 "data_size": 65536
00:14:04.828 }
00:14:04.828 ]
00:14:04.828 }'
00:14:04.828 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:04.828 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.088 [2024-11-28 02:29:38.659908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.088 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:05.088 "name": "Existed_Raid",
00:14:05.088 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.088 "strip_size_kb": 64,
00:14:05.088 "state": "configuring",
00:14:05.088 "raid_level": "raid5f",
00:14:05.088 "superblock": false,
00:14:05.089 "num_base_bdevs": 3,
00:14:05.089 "num_base_bdevs_discovered": 1,
00:14:05.089 "num_base_bdevs_operational": 3,
00:14:05.089 "base_bdevs_list": [
00:14:05.089 {
00:14:05.089 "name": "BaseBdev1",
00:14:05.089 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.089 "is_configured": false,
00:14:05.089 "data_offset": 0,
00:14:05.089 "data_size": 0
00:14:05.089 },
00:14:05.089 {
00:14:05.089 "name": null,
00:14:05.089 "uuid": "c28298ac-5a27-451d-a298-5f7b011d4613",
00:14:05.089 "is_configured": false,
00:14:05.089 "data_offset": 0,
00:14:05.089 "data_size": 65536
00:14:05.089 },
00:14:05.089 {
00:14:05.089 "name": "BaseBdev3",
00:14:05.089 "uuid": "9e623c16-07a4-4ecc-8e33-3389955256bd",
00:14:05.089 "is_configured": true,
00:14:05.089 "data_offset": 0,
00:14:05.089 "data_size": 65536
00:14:05.089 }
00:14:05.089 ]
00:14:05.089 }'
00:14:05.089 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:05.089 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.659 [2024-11-28 02:29:39.166240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:05.659 BaseBdev1
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.659 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.659 [
00:14:05.659 {
00:14:05.659 "name": "BaseBdev1",
00:14:05.659 "aliases": [
00:14:05.660 "713c198c-a7d8-4d89-b365-9b61d63f6e4d"
00:14:05.660 ],
00:14:05.660 "product_name": "Malloc disk",
00:14:05.660 "block_size": 512,
00:14:05.660 "num_blocks": 65536,
00:14:05.660 "uuid": "713c198c-a7d8-4d89-b365-9b61d63f6e4d",
00:14:05.660 "assigned_rate_limits": {
00:14:05.660 "rw_ios_per_sec": 0,
00:14:05.660 "rw_mbytes_per_sec": 0,
00:14:05.660 "r_mbytes_per_sec": 0,
00:14:05.660 "w_mbytes_per_sec": 0
00:14:05.660 },
00:14:05.660 "claimed": true,
00:14:05.660 "claim_type": "exclusive_write",
00:14:05.660 "zoned": false,
00:14:05.660 "supported_io_types": {
00:14:05.660 "read": true,
00:14:05.660 "write": true,
00:14:05.660 "unmap": true,
00:14:05.660 "flush": true,
00:14:05.660 "reset": true,
00:14:05.660 "nvme_admin": false,
00:14:05.660 "nvme_io": false,
00:14:05.660 "nvme_io_md": false,
00:14:05.660 "write_zeroes": true,
00:14:05.660 "zcopy": true,
00:14:05.660 "get_zone_info": false,
00:14:05.660 "zone_management": false,
00:14:05.660 "zone_append": false,
00:14:05.660 "compare": false,
00:14:05.660 "compare_and_write": false,
00:14:05.660 "abort": true,
00:14:05.660 "seek_hole": false,
00:14:05.660 "seek_data": false,
00:14:05.660 "copy": true,
00:14:05.660 "nvme_iov_md": false
00:14:05.660 },
00:14:05.660 "memory_domains": [
00:14:05.660 {
00:14:05.660 "dma_device_id": "system",
00:14:05.660 "dma_device_type": 1
00:14:05.660 },
00:14:05.660 {
00:14:05.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:05.660 "dma_device_type": 2
00:14:05.660 }
00:14:05.660 ],
00:14:05.660 "driver_specific": {}
00:14:05.660 }
00:14:05.660 ]
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:05.660 "name": "Existed_Raid",
00:14:05.660 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.660 "strip_size_kb": 64,
00:14:05.660 "state": "configuring",
00:14:05.660 "raid_level": "raid5f",
00:14:05.660 "superblock": false,
00:14:05.660 "num_base_bdevs": 3,
00:14:05.660 "num_base_bdevs_discovered": 2,
00:14:05.660 "num_base_bdevs_operational": 3,
00:14:05.660 "base_bdevs_list": [
00:14:05.660 {
00:14:05.660 "name": "BaseBdev1",
00:14:05.660 "uuid": "713c198c-a7d8-4d89-b365-9b61d63f6e4d",
00:14:05.660 "is_configured": true,
00:14:05.660 "data_offset": 0,
00:14:05.660 "data_size": 65536
00:14:05.660 },
00:14:05.660 {
00:14:05.660 "name": null,
00:14:05.660 "uuid": "c28298ac-5a27-451d-a298-5f7b011d4613",
00:14:05.660 "is_configured": false,
00:14:05.660 "data_offset": 0,
00:14:05.660 "data_size": 65536
00:14:05.660 },
00:14:05.660 {
00:14:05.660 "name": "BaseBdev3",
00:14:05.660 "uuid": "9e623c16-07a4-4ecc-8e33-3389955256bd",
00:14:05.660 "is_configured": true,
00:14:05.660 "data_offset": 0,
00:14:05.660 "data_size": 65536
00:14:05.660 }
00:14:05.660 ]
00:14:05.660 }'
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:05.660 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.230 [2024-11-28 02:29:39.713388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.230 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:06.230 "name": "Existed_Raid",
00:14:06.230 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.230 "strip_size_kb": 64,
00:14:06.230 "state": "configuring",
00:14:06.230 "raid_level": "raid5f",
00:14:06.230 "superblock": false,
00:14:06.230 "num_base_bdevs": 3,
00:14:06.230 "num_base_bdevs_discovered": 1,
00:14:06.230 "num_base_bdevs_operational": 3,
00:14:06.230 "base_bdevs_list": [
00:14:06.230 {
00:14:06.230 "name": "BaseBdev1",
00:14:06.230 "uuid": "713c198c-a7d8-4d89-b365-9b61d63f6e4d",
00:14:06.230 "is_configured": true,
00:14:06.230 "data_offset": 0,
00:14:06.230 "data_size": 65536
00:14:06.230 },
00:14:06.230 {
00:14:06.230 "name": null,
00:14:06.230 "uuid": "c28298ac-5a27-451d-a298-5f7b011d4613",
00:14:06.230 "is_configured": false,
00:14:06.230 "data_offset": 0,
00:14:06.230 "data_size": 65536
00:14:06.230 },
00:14:06.230 {
00:14:06.230 "name": null,
00:14:06.230 "uuid": "9e623c16-07a4-4ecc-8e33-3389955256bd",
00:14:06.230 "is_configured": false,
00:14:06.231 "data_offset": 0,
00:14:06.231 "data_size": 65536
00:14:06.231 }
00:14:06.231 ]
00:14:06.231 }'
00:14:06.231 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:06.231 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.491 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.491 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:06.491 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.491 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.491 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.751 [2024-11-28 02:29:40.196586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:06.751 "name": "Existed_Raid",
00:14:06.751 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.751 "strip_size_kb": 64,
00:14:06.751 "state": "configuring",
00:14:06.751 "raid_level": "raid5f",
00:14:06.751 "superblock": false,
00:14:06.751 "num_base_bdevs": 3,
00:14:06.751 "num_base_bdevs_discovered": 2,
00:14:06.751 "num_base_bdevs_operational": 3,
00:14:06.751 "base_bdevs_list": [
00:14:06.751 {
00:14:06.751 "name": "BaseBdev1",
00:14:06.751 "uuid": "713c198c-a7d8-4d89-b365-9b61d63f6e4d",
00:14:06.751 "is_configured": true,
00:14:06.751 "data_offset": 0,
00:14:06.751 "data_size": 65536
00:14:06.751 },
00:14:06.751 {
00:14:06.751 "name": null,
00:14:06.751 "uuid": "c28298ac-5a27-451d-a298-5f7b011d4613",
00:14:06.751 "is_configured": false,
00:14:06.751 "data_offset": 0,
00:14:06.751 "data_size": 65536
00:14:06.751 },
00:14:06.751 {
00:14:06.751 "name": "BaseBdev3",
00:14:06.751 "uuid": "9e623c16-07a4-4ecc-8e33-3389955256bd",
00:14:06.751 "is_configured": true,
00:14:06.751 "data_offset": 0,
00:14:06.751 "data_size": 65536
00:14:06.751 }
00:14:06.751 ]
00:14:06.751 }'
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:06.751 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.012 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.012 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.012 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.012 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:07.012 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.012 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:14:07.012 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:14:07.012 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.012 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.012 [2024-11-28 02:29:40.679776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:07.272 "name": "Existed_Raid",
00:14:07.272 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:07.272 "strip_size_kb": 64,
00:14:07.272 "state": "configuring",
00:14:07.272 "raid_level": "raid5f",
00:14:07.272 "superblock": false,
00:14:07.272 "num_base_bdevs": 3,
00:14:07.272 "num_base_bdevs_discovered": 1,
00:14:07.272 "num_base_bdevs_operational": 3,
00:14:07.272 "base_bdevs_list": [
00:14:07.272 {
00:14:07.272 "name": null,
00:14:07.272 "uuid": "713c198c-a7d8-4d89-b365-9b61d63f6e4d",
00:14:07.272 "is_configured": false,
00:14:07.272 "data_offset": 0,
00:14:07.272 "data_size": 65536
00:14:07.272 },
00:14:07.272 {
00:14:07.272 "name": null,
00:14:07.272 "uuid": "c28298ac-5a27-451d-a298-5f7b011d4613",
00:14:07.272 "is_configured": false,
00:14:07.272 "data_offset": 0,
00:14:07.272 "data_size": 65536
00:14:07.272 },
00:14:07.272 {
00:14:07.272 "name": "BaseBdev3",
00:14:07.272 "uuid": "9e623c16-07a4-4ecc-8e33-3389955256bd",
00:14:07.272 "is_configured": true,
00:14:07.272 "data_offset": 0,
00:14:07.272 "data_size": 65536
00:14:07.272 }
00:14:07.272 ]
00:14:07.272 }'
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:07.272 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.562 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.562 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.562 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.562 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:07.562 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.822 [2024-11-28 02:29:41.272042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:07.822 "name": "Existed_Raid",
00:14:07.822 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:07.822 "strip_size_kb": 64,
00:14:07.822 "state": "configuring",
00:14:07.822 "raid_level": "raid5f",
00:14:07.822 "superblock": false,
00:14:07.822 "num_base_bdevs": 3,
00:14:07.822 "num_base_bdevs_discovered": 2,
00:14:07.822 "num_base_bdevs_operational": 3,
00:14:07.822 "base_bdevs_list": [
00:14:07.822 {
00:14:07.822 "name": null,
00:14:07.822 "uuid": "713c198c-a7d8-4d89-b365-9b61d63f6e4d",
00:14:07.822 "is_configured": false,
00:14:07.822 "data_offset": 0,
00:14:07.822 "data_size": 65536
00:14:07.822 },
00:14:07.822 {
00:14:07.822 "name": "BaseBdev2",
00:14:07.822 "uuid": "c28298ac-5a27-451d-a298-5f7b011d4613",
00:14:07.822 "is_configured": true,
00:14:07.822 "data_offset": 0,
00:14:07.822 "data_size": 65536
00:14:07.822 },
00:14:07.822 {
00:14:07.822 "name": "BaseBdev3",
00:14:07.822 "uuid": "9e623c16-07a4-4ecc-8e33-3389955256bd",
00:14:07.822 "is_configured": true,
00:14:07.822 "data_offset": 0,
00:14:07.822 "data_size": 65536
00:14:07.822 }
00:14:07.822 ]
00:14:07.822 }'
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:07.822 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.082 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.362 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 713c198c-a7d8-4d89-b365-9b61d63f6e4d
00:14:08.362 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.362 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.362 [2024-11-28 02:29:41.799601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:14:08.362 [2024-11-28 02:29:41.799732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:14:08.362 [2024-11-28 02:29:41.799749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:14:08.362 [2024-11-28 02:29:41.800052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*:
raid_bdev_create_cb, 0x60d000006220 00:14:08.362 [2024-11-28 02:29:41.805303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:08.362 [2024-11-28 02:29:41.805364] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:08.362 [2024-11-28 02:29:41.805724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.362 NewBaseBdev 00:14:08.362 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.362 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:08.362 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:08.362 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:08.362 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.363 02:29:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.363 [ 00:14:08.363 { 00:14:08.363 "name": "NewBaseBdev", 00:14:08.363 "aliases": [ 00:14:08.363 "713c198c-a7d8-4d89-b365-9b61d63f6e4d" 00:14:08.363 ], 00:14:08.363 "product_name": "Malloc disk", 00:14:08.363 "block_size": 512, 00:14:08.363 "num_blocks": 65536, 00:14:08.363 "uuid": "713c198c-a7d8-4d89-b365-9b61d63f6e4d", 00:14:08.363 "assigned_rate_limits": { 00:14:08.363 "rw_ios_per_sec": 0, 00:14:08.363 "rw_mbytes_per_sec": 0, 00:14:08.363 "r_mbytes_per_sec": 0, 00:14:08.363 "w_mbytes_per_sec": 0 00:14:08.363 }, 00:14:08.363 "claimed": true, 00:14:08.363 "claim_type": "exclusive_write", 00:14:08.363 "zoned": false, 00:14:08.363 "supported_io_types": { 00:14:08.363 "read": true, 00:14:08.363 "write": true, 00:14:08.363 "unmap": true, 00:14:08.363 "flush": true, 00:14:08.363 "reset": true, 00:14:08.363 "nvme_admin": false, 00:14:08.363 "nvme_io": false, 00:14:08.363 "nvme_io_md": false, 00:14:08.363 "write_zeroes": true, 00:14:08.363 "zcopy": true, 00:14:08.363 "get_zone_info": false, 00:14:08.363 "zone_management": false, 00:14:08.363 "zone_append": false, 00:14:08.363 "compare": false, 00:14:08.363 "compare_and_write": false, 00:14:08.363 "abort": true, 00:14:08.363 "seek_hole": false, 00:14:08.363 "seek_data": false, 00:14:08.363 "copy": true, 00:14:08.363 "nvme_iov_md": false 00:14:08.363 }, 00:14:08.363 "memory_domains": [ 00:14:08.363 { 00:14:08.363 "dma_device_id": "system", 00:14:08.363 "dma_device_type": 1 00:14:08.363 }, 00:14:08.363 { 00:14:08.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.363 "dma_device_type": 2 00:14:08.363 } 00:14:08.363 ], 00:14:08.363 "driver_specific": {} 00:14:08.363 } 00:14:08.363 ] 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:08.363 02:29:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.363 "name": "Existed_Raid", 00:14:08.363 "uuid": "f989d41b-6dbd-4dc0-8279-19c64fb55289", 00:14:08.363 "strip_size_kb": 64, 00:14:08.363 "state": "online", 
00:14:08.363 "raid_level": "raid5f", 00:14:08.363 "superblock": false, 00:14:08.363 "num_base_bdevs": 3, 00:14:08.363 "num_base_bdevs_discovered": 3, 00:14:08.363 "num_base_bdevs_operational": 3, 00:14:08.363 "base_bdevs_list": [ 00:14:08.363 { 00:14:08.363 "name": "NewBaseBdev", 00:14:08.363 "uuid": "713c198c-a7d8-4d89-b365-9b61d63f6e4d", 00:14:08.363 "is_configured": true, 00:14:08.363 "data_offset": 0, 00:14:08.363 "data_size": 65536 00:14:08.363 }, 00:14:08.363 { 00:14:08.363 "name": "BaseBdev2", 00:14:08.363 "uuid": "c28298ac-5a27-451d-a298-5f7b011d4613", 00:14:08.363 "is_configured": true, 00:14:08.363 "data_offset": 0, 00:14:08.363 "data_size": 65536 00:14:08.363 }, 00:14:08.363 { 00:14:08.363 "name": "BaseBdev3", 00:14:08.363 "uuid": "9e623c16-07a4-4ecc-8e33-3389955256bd", 00:14:08.363 "is_configured": true, 00:14:08.363 "data_offset": 0, 00:14:08.363 "data_size": 65536 00:14:08.363 } 00:14:08.363 ] 00:14:08.363 }' 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.363 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.624 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.624 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:08.624 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:08.624 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:08.624 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:08.624 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:08.624 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:08.624 02:29:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:08.624 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.624 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.884 [2024-11-28 02:29:42.307518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:08.884 "name": "Existed_Raid", 00:14:08.884 "aliases": [ 00:14:08.884 "f989d41b-6dbd-4dc0-8279-19c64fb55289" 00:14:08.884 ], 00:14:08.884 "product_name": "Raid Volume", 00:14:08.884 "block_size": 512, 00:14:08.884 "num_blocks": 131072, 00:14:08.884 "uuid": "f989d41b-6dbd-4dc0-8279-19c64fb55289", 00:14:08.884 "assigned_rate_limits": { 00:14:08.884 "rw_ios_per_sec": 0, 00:14:08.884 "rw_mbytes_per_sec": 0, 00:14:08.884 "r_mbytes_per_sec": 0, 00:14:08.884 "w_mbytes_per_sec": 0 00:14:08.884 }, 00:14:08.884 "claimed": false, 00:14:08.884 "zoned": false, 00:14:08.884 "supported_io_types": { 00:14:08.884 "read": true, 00:14:08.884 "write": true, 00:14:08.884 "unmap": false, 00:14:08.884 "flush": false, 00:14:08.884 "reset": true, 00:14:08.884 "nvme_admin": false, 00:14:08.884 "nvme_io": false, 00:14:08.884 "nvme_io_md": false, 00:14:08.884 "write_zeroes": true, 00:14:08.884 "zcopy": false, 00:14:08.884 "get_zone_info": false, 00:14:08.884 "zone_management": false, 00:14:08.884 "zone_append": false, 00:14:08.884 "compare": false, 00:14:08.884 "compare_and_write": false, 00:14:08.884 "abort": false, 00:14:08.884 "seek_hole": false, 00:14:08.884 "seek_data": false, 00:14:08.884 "copy": false, 00:14:08.884 "nvme_iov_md": false 00:14:08.884 }, 00:14:08.884 "driver_specific": { 00:14:08.884 "raid": { 00:14:08.884 "uuid": 
"f989d41b-6dbd-4dc0-8279-19c64fb55289", 00:14:08.884 "strip_size_kb": 64, 00:14:08.884 "state": "online", 00:14:08.884 "raid_level": "raid5f", 00:14:08.884 "superblock": false, 00:14:08.884 "num_base_bdevs": 3, 00:14:08.884 "num_base_bdevs_discovered": 3, 00:14:08.884 "num_base_bdevs_operational": 3, 00:14:08.884 "base_bdevs_list": [ 00:14:08.884 { 00:14:08.884 "name": "NewBaseBdev", 00:14:08.884 "uuid": "713c198c-a7d8-4d89-b365-9b61d63f6e4d", 00:14:08.884 "is_configured": true, 00:14:08.884 "data_offset": 0, 00:14:08.884 "data_size": 65536 00:14:08.884 }, 00:14:08.884 { 00:14:08.884 "name": "BaseBdev2", 00:14:08.884 "uuid": "c28298ac-5a27-451d-a298-5f7b011d4613", 00:14:08.884 "is_configured": true, 00:14:08.884 "data_offset": 0, 00:14:08.884 "data_size": 65536 00:14:08.884 }, 00:14:08.884 { 00:14:08.884 "name": "BaseBdev3", 00:14:08.884 "uuid": "9e623c16-07a4-4ecc-8e33-3389955256bd", 00:14:08.884 "is_configured": true, 00:14:08.884 "data_offset": 0, 00:14:08.884 "data_size": 65536 00:14:08.884 } 00:14:08.884 ] 00:14:08.884 } 00:14:08.884 } 00:14:08.884 }' 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:08.884 BaseBdev2 00:14:08.884 BaseBdev3' 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.884 02:29:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.884 02:29:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.884 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.144 [2024-11-28 02:29:42.598810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.144 [2024-11-28 02:29:42.598839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.144 [2024-11-28 02:29:42.598913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.144 [2024-11-28 02:29:42.599235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.144 [2024-11-28 02:29:42.599250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79654 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79654 ']' 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79654 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79654 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79654' 00:14:09.144 killing process with pid 79654 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79654 00:14:09.144 [2024-11-28 02:29:42.636771] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.144 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79654 00:14:09.454 [2024-11-28 02:29:42.923690] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.393 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:10.393 00:14:10.393 real 0m10.450s 00:14:10.393 user 0m16.640s 00:14:10.393 sys 0m1.855s 00:14:10.393 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.393 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.393 ************************************ 00:14:10.393 END TEST raid5f_state_function_test 00:14:10.393 ************************************ 00:14:10.393 02:29:44 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:10.393 02:29:44 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:10.393 02:29:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.393 02:29:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.393 ************************************ 00:14:10.393 START TEST raid5f_state_function_test_sb 00:14:10.393 ************************************ 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:10.393 02:29:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:10.393 Process raid pid: 80281 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80281 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80281' 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80281 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80281 ']' 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.393 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.653 [2024-11-28 02:29:44.124814] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:14:10.653 [2024-11-28 02:29:44.125053] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.653 [2024-11-28 02:29:44.298537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.912 [2024-11-28 02:29:44.405087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.173 [2024-11-28 02:29:44.600413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.173 [2024-11-28 02:29:44.600466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.434 [2024-11-28 02:29:44.949615] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:11.434 [2024-11-28 02:29:44.949677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:11.434 [2024-11-28 02:29:44.949689] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.434 [2024-11-28 02:29:44.949701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.434 [2024-11-28 02:29:44.949714] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:11.434 [2024-11-28 02:29:44.949727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.434 02:29:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.434 02:29:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.434 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.434 "name": "Existed_Raid", 00:14:11.434 "uuid": "08da279f-1fef-4424-bacc-524f43d15866", 00:14:11.434 "strip_size_kb": 64, 00:14:11.434 "state": "configuring", 00:14:11.434 "raid_level": "raid5f", 00:14:11.434 "superblock": true, 00:14:11.434 "num_base_bdevs": 3, 00:14:11.434 "num_base_bdevs_discovered": 0, 00:14:11.434 "num_base_bdevs_operational": 3, 00:14:11.434 "base_bdevs_list": [ 00:14:11.434 { 00:14:11.434 "name": "BaseBdev1", 00:14:11.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.434 "is_configured": false, 00:14:11.434 "data_offset": 0, 00:14:11.434 "data_size": 0 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "name": "BaseBdev2", 00:14:11.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.434 "is_configured": false, 00:14:11.434 "data_offset": 0, 00:14:11.434 "data_size": 0 00:14:11.434 }, 00:14:11.434 { 00:14:11.434 "name": "BaseBdev3", 00:14:11.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.434 "is_configured": false, 00:14:11.434 "data_offset": 0, 00:14:11.434 "data_size": 0 00:14:11.434 } 00:14:11.434 ] 00:14:11.434 }' 00:14:11.434 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.434 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.001 [2024-11-28 02:29:45.428748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.001 
[2024-11-28 02:29:45.428842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.001 [2024-11-28 02:29:45.440720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.001 [2024-11-28 02:29:45.440813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.001 [2024-11-28 02:29:45.440846] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.001 [2024-11-28 02:29:45.440875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.001 [2024-11-28 02:29:45.440897] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:12.001 [2024-11-28 02:29:45.440940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.001 [2024-11-28 02:29:45.487057] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.001 BaseBdev1 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.001 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.001 [ 00:14:12.001 { 00:14:12.001 "name": "BaseBdev1", 00:14:12.001 "aliases": [ 00:14:12.001 "4ddf1109-6374-4c40-9e53-0c20057bd921" 00:14:12.001 ], 00:14:12.001 "product_name": "Malloc disk", 00:14:12.001 "block_size": 512, 00:14:12.001 
"num_blocks": 65536, 00:14:12.001 "uuid": "4ddf1109-6374-4c40-9e53-0c20057bd921", 00:14:12.001 "assigned_rate_limits": { 00:14:12.001 "rw_ios_per_sec": 0, 00:14:12.001 "rw_mbytes_per_sec": 0, 00:14:12.001 "r_mbytes_per_sec": 0, 00:14:12.001 "w_mbytes_per_sec": 0 00:14:12.001 }, 00:14:12.001 "claimed": true, 00:14:12.001 "claim_type": "exclusive_write", 00:14:12.001 "zoned": false, 00:14:12.001 "supported_io_types": { 00:14:12.001 "read": true, 00:14:12.001 "write": true, 00:14:12.001 "unmap": true, 00:14:12.001 "flush": true, 00:14:12.002 "reset": true, 00:14:12.002 "nvme_admin": false, 00:14:12.002 "nvme_io": false, 00:14:12.002 "nvme_io_md": false, 00:14:12.002 "write_zeroes": true, 00:14:12.002 "zcopy": true, 00:14:12.002 "get_zone_info": false, 00:14:12.002 "zone_management": false, 00:14:12.002 "zone_append": false, 00:14:12.002 "compare": false, 00:14:12.002 "compare_and_write": false, 00:14:12.002 "abort": true, 00:14:12.002 "seek_hole": false, 00:14:12.002 "seek_data": false, 00:14:12.002 "copy": true, 00:14:12.002 "nvme_iov_md": false 00:14:12.002 }, 00:14:12.002 "memory_domains": [ 00:14:12.002 { 00:14:12.002 "dma_device_id": "system", 00:14:12.002 "dma_device_type": 1 00:14:12.002 }, 00:14:12.002 { 00:14:12.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.002 "dma_device_type": 2 00:14:12.002 } 00:14:12.002 ], 00:14:12.002 "driver_specific": {} 00:14:12.002 } 00:14:12.002 ] 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.002 "name": "Existed_Raid", 00:14:12.002 "uuid": "431af631-9ab0-4eed-beeb-c9b7039fc3c3", 00:14:12.002 "strip_size_kb": 64, 00:14:12.002 "state": "configuring", 00:14:12.002 "raid_level": "raid5f", 00:14:12.002 "superblock": true, 00:14:12.002 "num_base_bdevs": 3, 00:14:12.002 "num_base_bdevs_discovered": 1, 00:14:12.002 "num_base_bdevs_operational": 3, 00:14:12.002 "base_bdevs_list": [ 00:14:12.002 { 00:14:12.002 
"name": "BaseBdev1", 00:14:12.002 "uuid": "4ddf1109-6374-4c40-9e53-0c20057bd921", 00:14:12.002 "is_configured": true, 00:14:12.002 "data_offset": 2048, 00:14:12.002 "data_size": 63488 00:14:12.002 }, 00:14:12.002 { 00:14:12.002 "name": "BaseBdev2", 00:14:12.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.002 "is_configured": false, 00:14:12.002 "data_offset": 0, 00:14:12.002 "data_size": 0 00:14:12.002 }, 00:14:12.002 { 00:14:12.002 "name": "BaseBdev3", 00:14:12.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.002 "is_configured": false, 00:14:12.002 "data_offset": 0, 00:14:12.002 "data_size": 0 00:14:12.002 } 00:14:12.002 ] 00:14:12.002 }' 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.002 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.262 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.262 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.262 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.262 [2024-11-28 02:29:45.926340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.262 [2024-11-28 02:29:45.926440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:12.262 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.262 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:12.262 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.262 02:29:45 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:12.262 [2024-11-28 02:29:45.938382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.262 [2024-11-28 02:29:45.940218] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.262 [2024-11-28 02:29:45.940305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.262 [2024-11-28 02:29:45.940355] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:12.262 [2024-11-28 02:29:45.940383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.522 "name": "Existed_Raid", 00:14:12.522 "uuid": "d4f61068-0157-474a-82a0-e3af05c51bad", 00:14:12.522 "strip_size_kb": 64, 00:14:12.522 "state": "configuring", 00:14:12.522 "raid_level": "raid5f", 00:14:12.522 "superblock": true, 00:14:12.522 "num_base_bdevs": 3, 00:14:12.522 "num_base_bdevs_discovered": 1, 00:14:12.522 "num_base_bdevs_operational": 3, 00:14:12.522 "base_bdevs_list": [ 00:14:12.522 { 00:14:12.522 "name": "BaseBdev1", 00:14:12.522 "uuid": "4ddf1109-6374-4c40-9e53-0c20057bd921", 00:14:12.522 "is_configured": true, 00:14:12.522 "data_offset": 2048, 00:14:12.522 "data_size": 63488 00:14:12.522 }, 00:14:12.522 { 00:14:12.522 "name": "BaseBdev2", 00:14:12.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.522 "is_configured": false, 00:14:12.522 "data_offset": 0, 00:14:12.522 "data_size": 0 00:14:12.522 }, 00:14:12.522 { 00:14:12.522 "name": "BaseBdev3", 00:14:12.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.522 "is_configured": false, 00:14:12.522 "data_offset": 0, 00:14:12.522 "data_size": 
0 00:14:12.522 } 00:14:12.522 ] 00:14:12.522 }' 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.522 02:29:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.782 [2024-11-28 02:29:46.452785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.782 BaseBdev2 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.782 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.043 [ 00:14:13.043 { 00:14:13.043 "name": "BaseBdev2", 00:14:13.043 "aliases": [ 00:14:13.043 "37f63485-a982-49d1-8a59-c4c8c2c61b9e" 00:14:13.043 ], 00:14:13.043 "product_name": "Malloc disk", 00:14:13.043 "block_size": 512, 00:14:13.043 "num_blocks": 65536, 00:14:13.043 "uuid": "37f63485-a982-49d1-8a59-c4c8c2c61b9e", 00:14:13.043 "assigned_rate_limits": { 00:14:13.043 "rw_ios_per_sec": 0, 00:14:13.043 "rw_mbytes_per_sec": 0, 00:14:13.043 "r_mbytes_per_sec": 0, 00:14:13.043 "w_mbytes_per_sec": 0 00:14:13.043 }, 00:14:13.043 "claimed": true, 00:14:13.043 "claim_type": "exclusive_write", 00:14:13.043 "zoned": false, 00:14:13.043 "supported_io_types": { 00:14:13.043 "read": true, 00:14:13.043 "write": true, 00:14:13.043 "unmap": true, 00:14:13.043 "flush": true, 00:14:13.043 "reset": true, 00:14:13.043 "nvme_admin": false, 00:14:13.043 "nvme_io": false, 00:14:13.043 "nvme_io_md": false, 00:14:13.043 "write_zeroes": true, 00:14:13.043 "zcopy": true, 00:14:13.043 "get_zone_info": false, 00:14:13.043 "zone_management": false, 00:14:13.043 "zone_append": false, 00:14:13.043 "compare": false, 00:14:13.043 "compare_and_write": false, 00:14:13.043 "abort": true, 00:14:13.043 "seek_hole": false, 00:14:13.043 "seek_data": false, 00:14:13.043 "copy": true, 00:14:13.043 "nvme_iov_md": false 00:14:13.043 }, 00:14:13.043 "memory_domains": [ 00:14:13.043 { 00:14:13.043 "dma_device_id": "system", 00:14:13.043 "dma_device_type": 1 00:14:13.043 }, 00:14:13.043 { 00:14:13.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.043 "dma_device_type": 2 00:14:13.043 } 
00:14:13.043 ], 00:14:13.043 "driver_specific": {} 00:14:13.043 } 00:14:13.043 ] 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.043 02:29:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.043 "name": "Existed_Raid", 00:14:13.043 "uuid": "d4f61068-0157-474a-82a0-e3af05c51bad", 00:14:13.043 "strip_size_kb": 64, 00:14:13.043 "state": "configuring", 00:14:13.043 "raid_level": "raid5f", 00:14:13.043 "superblock": true, 00:14:13.043 "num_base_bdevs": 3, 00:14:13.043 "num_base_bdevs_discovered": 2, 00:14:13.043 "num_base_bdevs_operational": 3, 00:14:13.043 "base_bdevs_list": [ 00:14:13.043 { 00:14:13.043 "name": "BaseBdev1", 00:14:13.043 "uuid": "4ddf1109-6374-4c40-9e53-0c20057bd921", 00:14:13.043 "is_configured": true, 00:14:13.043 "data_offset": 2048, 00:14:13.043 "data_size": 63488 00:14:13.043 }, 00:14:13.043 { 00:14:13.043 "name": "BaseBdev2", 00:14:13.043 "uuid": "37f63485-a982-49d1-8a59-c4c8c2c61b9e", 00:14:13.043 "is_configured": true, 00:14:13.043 "data_offset": 2048, 00:14:13.043 "data_size": 63488 00:14:13.043 }, 00:14:13.043 { 00:14:13.043 "name": "BaseBdev3", 00:14:13.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.043 "is_configured": false, 00:14:13.043 "data_offset": 0, 00:14:13.043 "data_size": 0 00:14:13.043 } 00:14:13.043 ] 00:14:13.043 }' 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.043 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.303 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:13.303 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:13.303 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.562 [2024-11-28 02:29:46.982692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.562 [2024-11-28 02:29:46.983092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:13.562 [2024-11-28 02:29:46.983162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:13.562 [2024-11-28 02:29:46.983470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:13.562 BaseBdev3 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.562 [2024-11-28 02:29:46.989029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:13.562 [2024-11-28 02:29:46.989054] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:13.562 [2024-11-28 02:29:46.989248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:13.562 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.562 [ 00:14:13.562 { 00:14:13.562 "name": "BaseBdev3", 00:14:13.562 "aliases": [ 00:14:13.562 "7eaae07e-8a91-496c-a524-a2013cf133f8" 00:14:13.562 ], 00:14:13.562 "product_name": "Malloc disk", 00:14:13.562 "block_size": 512, 00:14:13.562 "num_blocks": 65536, 00:14:13.562 "uuid": "7eaae07e-8a91-496c-a524-a2013cf133f8", 00:14:13.562 "assigned_rate_limits": { 00:14:13.562 "rw_ios_per_sec": 0, 00:14:13.562 "rw_mbytes_per_sec": 0, 00:14:13.562 "r_mbytes_per_sec": 0, 00:14:13.562 "w_mbytes_per_sec": 0 00:14:13.562 }, 00:14:13.562 "claimed": true, 00:14:13.562 "claim_type": "exclusive_write", 00:14:13.562 "zoned": false, 00:14:13.562 "supported_io_types": { 00:14:13.562 "read": true, 00:14:13.562 "write": true, 00:14:13.562 "unmap": true, 00:14:13.562 "flush": true, 00:14:13.562 "reset": true, 00:14:13.562 "nvme_admin": false, 00:14:13.562 "nvme_io": false, 00:14:13.562 "nvme_io_md": false, 00:14:13.562 "write_zeroes": true, 00:14:13.562 "zcopy": true, 00:14:13.562 "get_zone_info": false, 00:14:13.562 "zone_management": false, 00:14:13.562 "zone_append": false, 00:14:13.562 "compare": false, 00:14:13.562 "compare_and_write": false, 00:14:13.562 "abort": true, 00:14:13.562 "seek_hole": false, 00:14:13.562 "seek_data": false, 00:14:13.562 "copy": true, 00:14:13.562 
"nvme_iov_md": false 00:14:13.562 }, 00:14:13.562 "memory_domains": [ 00:14:13.562 { 00:14:13.562 "dma_device_id": "system", 00:14:13.562 "dma_device_type": 1 00:14:13.562 }, 00:14:13.562 { 00:14:13.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.562 "dma_device_type": 2 00:14:13.562 } 00:14:13.562 ], 00:14:13.562 "driver_specific": {} 00:14:13.562 } 00:14:13.562 ] 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.562 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.562 "name": "Existed_Raid", 00:14:13.562 "uuid": "d4f61068-0157-474a-82a0-e3af05c51bad", 00:14:13.562 "strip_size_kb": 64, 00:14:13.562 "state": "online", 00:14:13.562 "raid_level": "raid5f", 00:14:13.562 "superblock": true, 00:14:13.562 "num_base_bdevs": 3, 00:14:13.562 "num_base_bdevs_discovered": 3, 00:14:13.563 "num_base_bdevs_operational": 3, 00:14:13.563 "base_bdevs_list": [ 00:14:13.563 { 00:14:13.563 "name": "BaseBdev1", 00:14:13.563 "uuid": "4ddf1109-6374-4c40-9e53-0c20057bd921", 00:14:13.563 "is_configured": true, 00:14:13.563 "data_offset": 2048, 00:14:13.563 "data_size": 63488 00:14:13.563 }, 00:14:13.563 { 00:14:13.563 "name": "BaseBdev2", 00:14:13.563 "uuid": "37f63485-a982-49d1-8a59-c4c8c2c61b9e", 00:14:13.563 "is_configured": true, 00:14:13.563 "data_offset": 2048, 00:14:13.563 "data_size": 63488 00:14:13.563 }, 00:14:13.563 { 00:14:13.563 "name": "BaseBdev3", 00:14:13.563 "uuid": "7eaae07e-8a91-496c-a524-a2013cf133f8", 00:14:13.563 "is_configured": true, 00:14:13.563 "data_offset": 2048, 00:14:13.563 "data_size": 63488 00:14:13.563 } 00:14:13.563 ] 00:14:13.563 }' 00:14:13.563 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.563 02:29:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.823 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:13.823 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:13.823 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:13.823 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:13.823 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:13.823 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:13.823 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:13.823 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:13.823 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.823 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.823 [2024-11-28 02:29:47.494721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.082 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.082 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:14.082 "name": "Existed_Raid", 00:14:14.082 "aliases": [ 00:14:14.082 "d4f61068-0157-474a-82a0-e3af05c51bad" 00:14:14.082 ], 00:14:14.082 "product_name": "Raid Volume", 00:14:14.082 "block_size": 512, 00:14:14.082 "num_blocks": 126976, 00:14:14.082 "uuid": "d4f61068-0157-474a-82a0-e3af05c51bad", 00:14:14.082 "assigned_rate_limits": { 00:14:14.082 "rw_ios_per_sec": 0, 00:14:14.082 
"rw_mbytes_per_sec": 0, 00:14:14.082 "r_mbytes_per_sec": 0, 00:14:14.082 "w_mbytes_per_sec": 0 00:14:14.082 }, 00:14:14.082 "claimed": false, 00:14:14.082 "zoned": false, 00:14:14.082 "supported_io_types": { 00:14:14.082 "read": true, 00:14:14.082 "write": true, 00:14:14.082 "unmap": false, 00:14:14.082 "flush": false, 00:14:14.082 "reset": true, 00:14:14.082 "nvme_admin": false, 00:14:14.082 "nvme_io": false, 00:14:14.082 "nvme_io_md": false, 00:14:14.082 "write_zeroes": true, 00:14:14.082 "zcopy": false, 00:14:14.082 "get_zone_info": false, 00:14:14.082 "zone_management": false, 00:14:14.082 "zone_append": false, 00:14:14.082 "compare": false, 00:14:14.082 "compare_and_write": false, 00:14:14.082 "abort": false, 00:14:14.083 "seek_hole": false, 00:14:14.083 "seek_data": false, 00:14:14.083 "copy": false, 00:14:14.083 "nvme_iov_md": false 00:14:14.083 }, 00:14:14.083 "driver_specific": { 00:14:14.083 "raid": { 00:14:14.083 "uuid": "d4f61068-0157-474a-82a0-e3af05c51bad", 00:14:14.083 "strip_size_kb": 64, 00:14:14.083 "state": "online", 00:14:14.083 "raid_level": "raid5f", 00:14:14.083 "superblock": true, 00:14:14.083 "num_base_bdevs": 3, 00:14:14.083 "num_base_bdevs_discovered": 3, 00:14:14.083 "num_base_bdevs_operational": 3, 00:14:14.083 "base_bdevs_list": [ 00:14:14.083 { 00:14:14.083 "name": "BaseBdev1", 00:14:14.083 "uuid": "4ddf1109-6374-4c40-9e53-0c20057bd921", 00:14:14.083 "is_configured": true, 00:14:14.083 "data_offset": 2048, 00:14:14.083 "data_size": 63488 00:14:14.083 }, 00:14:14.083 { 00:14:14.083 "name": "BaseBdev2", 00:14:14.083 "uuid": "37f63485-a982-49d1-8a59-c4c8c2c61b9e", 00:14:14.083 "is_configured": true, 00:14:14.083 "data_offset": 2048, 00:14:14.083 "data_size": 63488 00:14:14.083 }, 00:14:14.083 { 00:14:14.083 "name": "BaseBdev3", 00:14:14.083 "uuid": "7eaae07e-8a91-496c-a524-a2013cf133f8", 00:14:14.083 "is_configured": true, 00:14:14.083 "data_offset": 2048, 00:14:14.083 "data_size": 63488 00:14:14.083 } 00:14:14.083 ] 00:14:14.083 } 
00:14:14.083 } 00:14:14.083 }' 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:14.083 BaseBdev2 00:14:14.083 BaseBdev3' 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.083 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.343 [2024-11-28 
02:29:47.770080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.343 02:29:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.343 "name": "Existed_Raid", 00:14:14.343 "uuid": "d4f61068-0157-474a-82a0-e3af05c51bad", 00:14:14.343 "strip_size_kb": 64, 00:14:14.343 "state": "online", 00:14:14.343 "raid_level": "raid5f", 00:14:14.343 "superblock": true, 00:14:14.343 "num_base_bdevs": 3, 00:14:14.343 "num_base_bdevs_discovered": 2, 00:14:14.343 "num_base_bdevs_operational": 2, 00:14:14.343 "base_bdevs_list": [ 00:14:14.343 { 00:14:14.343 "name": null, 00:14:14.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.343 "is_configured": false, 00:14:14.343 "data_offset": 0, 00:14:14.343 "data_size": 63488 00:14:14.343 }, 00:14:14.343 { 00:14:14.343 "name": "BaseBdev2", 00:14:14.343 "uuid": "37f63485-a982-49d1-8a59-c4c8c2c61b9e", 00:14:14.343 "is_configured": true, 00:14:14.343 "data_offset": 2048, 00:14:14.343 "data_size": 63488 00:14:14.343 }, 00:14:14.343 { 00:14:14.343 "name": "BaseBdev3", 00:14:14.343 "uuid": "7eaae07e-8a91-496c-a524-a2013cf133f8", 00:14:14.343 "is_configured": true, 00:14:14.343 "data_offset": 2048, 00:14:14.343 "data_size": 63488 00:14:14.343 } 00:14:14.343 ] 00:14:14.343 }' 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.343 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.915 [2024-11-28 02:29:48.399338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:14.915 [2024-11-28 02:29:48.399563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.915 [2024-11-28 02:29:48.503847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:14.915 02:29:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.915 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.915 [2024-11-28 02:29:48.559858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:14.915 [2024-11-28 02:29:48.560068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.176 
02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.176 BaseBdev2 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:15.176 02:29:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.176 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.176 [ 00:14:15.176 { 00:14:15.176 "name": "BaseBdev2", 00:14:15.176 "aliases": [ 00:14:15.176 "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7" 00:14:15.176 ], 00:14:15.176 "product_name": "Malloc disk", 00:14:15.176 "block_size": 512, 00:14:15.176 "num_blocks": 65536, 00:14:15.176 "uuid": "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7", 00:14:15.176 "assigned_rate_limits": { 00:14:15.176 "rw_ios_per_sec": 0, 00:14:15.176 "rw_mbytes_per_sec": 0, 00:14:15.176 "r_mbytes_per_sec": 0, 00:14:15.177 "w_mbytes_per_sec": 0 00:14:15.177 }, 00:14:15.177 "claimed": false, 00:14:15.177 "zoned": false, 00:14:15.177 "supported_io_types": { 00:14:15.177 "read": true, 00:14:15.177 "write": true, 00:14:15.177 "unmap": true, 00:14:15.177 "flush": true, 00:14:15.177 "reset": true, 00:14:15.177 "nvme_admin": false, 00:14:15.177 "nvme_io": false, 00:14:15.177 "nvme_io_md": false, 00:14:15.177 "write_zeroes": true, 00:14:15.177 "zcopy": true, 00:14:15.177 "get_zone_info": false, 
00:14:15.177 "zone_management": false, 00:14:15.177 "zone_append": false, 00:14:15.177 "compare": false, 00:14:15.177 "compare_and_write": false, 00:14:15.177 "abort": true, 00:14:15.177 "seek_hole": false, 00:14:15.177 "seek_data": false, 00:14:15.177 "copy": true, 00:14:15.177 "nvme_iov_md": false 00:14:15.177 }, 00:14:15.177 "memory_domains": [ 00:14:15.177 { 00:14:15.177 "dma_device_id": "system", 00:14:15.177 "dma_device_type": 1 00:14:15.177 }, 00:14:15.177 { 00:14:15.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.177 "dma_device_type": 2 00:14:15.177 } 00:14:15.177 ], 00:14:15.177 "driver_specific": {} 00:14:15.177 } 00:14:15.177 ] 00:14:15.177 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.177 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:15.177 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:15.177 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.177 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:15.177 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.177 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.437 BaseBdev3 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.437 02:29:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.437 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.437 [ 00:14:15.437 { 00:14:15.437 "name": "BaseBdev3", 00:14:15.437 "aliases": [ 00:14:15.437 "f40ab788-f45d-4986-ab8b-5963a882dacb" 00:14:15.437 ], 00:14:15.437 "product_name": "Malloc disk", 00:14:15.437 "block_size": 512, 00:14:15.437 "num_blocks": 65536, 00:14:15.437 "uuid": "f40ab788-f45d-4986-ab8b-5963a882dacb", 00:14:15.437 "assigned_rate_limits": { 00:14:15.437 "rw_ios_per_sec": 0, 00:14:15.437 "rw_mbytes_per_sec": 0, 00:14:15.437 "r_mbytes_per_sec": 0, 00:14:15.437 "w_mbytes_per_sec": 0 00:14:15.437 }, 00:14:15.437 "claimed": false, 00:14:15.437 "zoned": false, 00:14:15.437 "supported_io_types": { 00:14:15.437 "read": true, 00:14:15.437 "write": true, 00:14:15.437 "unmap": true, 00:14:15.437 "flush": true, 00:14:15.437 "reset": true, 00:14:15.437 "nvme_admin": false, 00:14:15.437 "nvme_io": false, 00:14:15.437 "nvme_io_md": 
false, 00:14:15.437 "write_zeroes": true, 00:14:15.438 "zcopy": true, 00:14:15.438 "get_zone_info": false, 00:14:15.438 "zone_management": false, 00:14:15.438 "zone_append": false, 00:14:15.438 "compare": false, 00:14:15.438 "compare_and_write": false, 00:14:15.438 "abort": true, 00:14:15.438 "seek_hole": false, 00:14:15.438 "seek_data": false, 00:14:15.438 "copy": true, 00:14:15.438 "nvme_iov_md": false 00:14:15.438 }, 00:14:15.438 "memory_domains": [ 00:14:15.438 { 00:14:15.438 "dma_device_id": "system", 00:14:15.438 "dma_device_type": 1 00:14:15.438 }, 00:14:15.438 { 00:14:15.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.438 "dma_device_type": 2 00:14:15.438 } 00:14:15.438 ], 00:14:15.438 "driver_specific": {} 00:14:15.438 } 00:14:15.438 ] 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.438 [2024-11-28 02:29:48.905626] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.438 [2024-11-28 02:29:48.905760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.438 [2024-11-28 02:29:48.905806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:15.438 [2024-11-28 02:29:48.907948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.438 02:29:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.438 "name": "Existed_Raid", 00:14:15.438 "uuid": "57daa532-fe5b-47f1-ab57-ec11a8d16900", 00:14:15.438 "strip_size_kb": 64, 00:14:15.438 "state": "configuring", 00:14:15.438 "raid_level": "raid5f", 00:14:15.438 "superblock": true, 00:14:15.438 "num_base_bdevs": 3, 00:14:15.438 "num_base_bdevs_discovered": 2, 00:14:15.438 "num_base_bdevs_operational": 3, 00:14:15.438 "base_bdevs_list": [ 00:14:15.438 { 00:14:15.438 "name": "BaseBdev1", 00:14:15.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.438 "is_configured": false, 00:14:15.438 "data_offset": 0, 00:14:15.438 "data_size": 0 00:14:15.438 }, 00:14:15.438 { 00:14:15.438 "name": "BaseBdev2", 00:14:15.438 "uuid": "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7", 00:14:15.438 "is_configured": true, 00:14:15.438 "data_offset": 2048, 00:14:15.438 "data_size": 63488 00:14:15.438 }, 00:14:15.438 { 00:14:15.438 "name": "BaseBdev3", 00:14:15.438 "uuid": "f40ab788-f45d-4986-ab8b-5963a882dacb", 00:14:15.438 "is_configured": true, 00:14:15.438 "data_offset": 2048, 00:14:15.438 "data_size": 63488 00:14:15.438 } 00:14:15.438 ] 00:14:15.438 }' 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.438 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.698 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:15.698 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.698 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.698 [2024-11-28 02:29:49.369034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.958 
02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:15.958 "name": "Existed_Raid", 00:14:15.958 "uuid": "57daa532-fe5b-47f1-ab57-ec11a8d16900", 00:14:15.958 "strip_size_kb": 64, 00:14:15.958 "state": "configuring", 00:14:15.958 "raid_level": "raid5f", 00:14:15.958 "superblock": true, 00:14:15.958 "num_base_bdevs": 3, 00:14:15.958 "num_base_bdevs_discovered": 1, 00:14:15.958 "num_base_bdevs_operational": 3, 00:14:15.958 "base_bdevs_list": [ 00:14:15.958 { 00:14:15.958 "name": "BaseBdev1", 00:14:15.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.958 "is_configured": false, 00:14:15.958 "data_offset": 0, 00:14:15.958 "data_size": 0 00:14:15.958 }, 00:14:15.958 { 00:14:15.958 "name": null, 00:14:15.958 "uuid": "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7", 00:14:15.958 "is_configured": false, 00:14:15.958 "data_offset": 0, 00:14:15.958 "data_size": 63488 00:14:15.958 }, 00:14:15.958 { 00:14:15.958 "name": "BaseBdev3", 00:14:15.958 "uuid": "f40ab788-f45d-4986-ab8b-5963a882dacb", 00:14:15.958 "is_configured": true, 00:14:15.958 "data_offset": 2048, 00:14:15.958 "data_size": 63488 00:14:15.958 } 00:14:15.958 ] 00:14:15.958 }' 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.958 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.218 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:16.218 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.218 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.218 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.218 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.218 02:29:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:16.218 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.218 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.218 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.479 [2024-11-28 02:29:49.915904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.479 BaseBdev1 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.479 
02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.479 [ 00:14:16.479 { 00:14:16.479 "name": "BaseBdev1", 00:14:16.479 "aliases": [ 00:14:16.479 "f02e589e-2d6c-4945-995b-1dd883965ac6" 00:14:16.479 ], 00:14:16.479 "product_name": "Malloc disk", 00:14:16.479 "block_size": 512, 00:14:16.479 "num_blocks": 65536, 00:14:16.479 "uuid": "f02e589e-2d6c-4945-995b-1dd883965ac6", 00:14:16.479 "assigned_rate_limits": { 00:14:16.479 "rw_ios_per_sec": 0, 00:14:16.479 "rw_mbytes_per_sec": 0, 00:14:16.479 "r_mbytes_per_sec": 0, 00:14:16.479 "w_mbytes_per_sec": 0 00:14:16.479 }, 00:14:16.479 "claimed": true, 00:14:16.479 "claim_type": "exclusive_write", 00:14:16.479 "zoned": false, 00:14:16.479 "supported_io_types": { 00:14:16.479 "read": true, 00:14:16.479 "write": true, 00:14:16.479 "unmap": true, 00:14:16.479 "flush": true, 00:14:16.479 "reset": true, 00:14:16.479 "nvme_admin": false, 00:14:16.479 "nvme_io": false, 00:14:16.479 "nvme_io_md": false, 00:14:16.479 "write_zeroes": true, 00:14:16.479 "zcopy": true, 00:14:16.479 "get_zone_info": false, 00:14:16.479 "zone_management": false, 00:14:16.479 "zone_append": false, 00:14:16.479 "compare": false, 00:14:16.479 "compare_and_write": false, 00:14:16.479 "abort": true, 00:14:16.479 "seek_hole": false, 00:14:16.479 "seek_data": false, 00:14:16.479 "copy": true, 00:14:16.479 "nvme_iov_md": false 00:14:16.479 }, 00:14:16.479 "memory_domains": [ 00:14:16.479 { 00:14:16.479 "dma_device_id": "system", 00:14:16.479 "dma_device_type": 1 00:14:16.479 }, 00:14:16.479 { 00:14:16.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.479 "dma_device_type": 2 00:14:16.479 } 00:14:16.479 ], 00:14:16.479 "driver_specific": {} 00:14:16.479 } 00:14:16.479 ] 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.479 
02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.479 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.480 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.480 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:16.480 "name": "Existed_Raid", 00:14:16.480 "uuid": "57daa532-fe5b-47f1-ab57-ec11a8d16900", 00:14:16.480 "strip_size_kb": 64, 00:14:16.480 "state": "configuring", 00:14:16.480 "raid_level": "raid5f", 00:14:16.480 "superblock": true, 00:14:16.480 "num_base_bdevs": 3, 00:14:16.480 "num_base_bdevs_discovered": 2, 00:14:16.480 "num_base_bdevs_operational": 3, 00:14:16.480 "base_bdevs_list": [ 00:14:16.480 { 00:14:16.480 "name": "BaseBdev1", 00:14:16.480 "uuid": "f02e589e-2d6c-4945-995b-1dd883965ac6", 00:14:16.480 "is_configured": true, 00:14:16.480 "data_offset": 2048, 00:14:16.480 "data_size": 63488 00:14:16.480 }, 00:14:16.480 { 00:14:16.480 "name": null, 00:14:16.480 "uuid": "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7", 00:14:16.480 "is_configured": false, 00:14:16.480 "data_offset": 0, 00:14:16.480 "data_size": 63488 00:14:16.480 }, 00:14:16.480 { 00:14:16.480 "name": "BaseBdev3", 00:14:16.480 "uuid": "f40ab788-f45d-4986-ab8b-5963a882dacb", 00:14:16.480 "is_configured": true, 00:14:16.480 "data_offset": 2048, 00:14:16.480 "data_size": 63488 00:14:16.480 } 00:14:16.480 ] 00:14:16.480 }' 00:14:16.480 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.480 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.740 [2024-11-28 02:29:50.383133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.740 02:29:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.740 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.001 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.001 "name": "Existed_Raid", 00:14:17.001 "uuid": "57daa532-fe5b-47f1-ab57-ec11a8d16900", 00:14:17.001 "strip_size_kb": 64, 00:14:17.001 "state": "configuring", 00:14:17.001 "raid_level": "raid5f", 00:14:17.001 "superblock": true, 00:14:17.001 "num_base_bdevs": 3, 00:14:17.001 "num_base_bdevs_discovered": 1, 00:14:17.001 "num_base_bdevs_operational": 3, 00:14:17.001 "base_bdevs_list": [ 00:14:17.001 { 00:14:17.001 "name": "BaseBdev1", 00:14:17.001 "uuid": "f02e589e-2d6c-4945-995b-1dd883965ac6", 00:14:17.001 "is_configured": true, 00:14:17.001 "data_offset": 2048, 00:14:17.001 "data_size": 63488 00:14:17.001 }, 00:14:17.001 { 00:14:17.001 "name": null, 00:14:17.001 "uuid": "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7", 00:14:17.001 "is_configured": false, 00:14:17.001 "data_offset": 0, 00:14:17.001 "data_size": 63488 00:14:17.001 }, 00:14:17.001 { 00:14:17.001 "name": null, 00:14:17.001 "uuid": "f40ab788-f45d-4986-ab8b-5963a882dacb", 00:14:17.001 "is_configured": false, 00:14:17.001 "data_offset": 0, 00:14:17.001 "data_size": 63488 00:14:17.001 } 00:14:17.001 ] 00:14:17.001 }' 00:14:17.001 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.001 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.261 [2024-11-28 02:29:50.878351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.261 02:29:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.261 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.261 "name": "Existed_Raid", 00:14:17.261 "uuid": "57daa532-fe5b-47f1-ab57-ec11a8d16900", 00:14:17.261 "strip_size_kb": 64, 00:14:17.261 "state": "configuring", 00:14:17.261 "raid_level": "raid5f", 00:14:17.261 "superblock": true, 00:14:17.261 "num_base_bdevs": 3, 00:14:17.261 "num_base_bdevs_discovered": 2, 00:14:17.261 "num_base_bdevs_operational": 3, 00:14:17.261 "base_bdevs_list": [ 00:14:17.261 { 00:14:17.261 "name": "BaseBdev1", 00:14:17.261 "uuid": "f02e589e-2d6c-4945-995b-1dd883965ac6", 00:14:17.261 "is_configured": true, 00:14:17.261 "data_offset": 2048, 00:14:17.261 "data_size": 63488 00:14:17.261 }, 00:14:17.261 { 00:14:17.261 "name": null, 00:14:17.261 "uuid": "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7", 00:14:17.261 "is_configured": false, 00:14:17.261 "data_offset": 0, 00:14:17.261 "data_size": 63488 00:14:17.261 }, 00:14:17.261 { 
00:14:17.261 "name": "BaseBdev3", 00:14:17.261 "uuid": "f40ab788-f45d-4986-ab8b-5963a882dacb", 00:14:17.261 "is_configured": true, 00:14:17.261 "data_offset": 2048, 00:14:17.261 "data_size": 63488 00:14:17.262 } 00:14:17.262 ] 00:14:17.262 }' 00:14:17.262 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.262 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.834 [2024-11-28 02:29:51.385554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.834 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.835 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.835 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.835 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.835 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.835 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.835 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.835 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.835 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.835 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.835 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.095 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.095 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.095 "name": "Existed_Raid", 00:14:18.095 "uuid": "57daa532-fe5b-47f1-ab57-ec11a8d16900", 00:14:18.095 "strip_size_kb": 64, 00:14:18.095 "state": "configuring", 00:14:18.095 "raid_level": "raid5f", 00:14:18.095 "superblock": true, 00:14:18.095 "num_base_bdevs": 3, 00:14:18.095 "num_base_bdevs_discovered": 1, 00:14:18.095 
"num_base_bdevs_operational": 3, 00:14:18.095 "base_bdevs_list": [ 00:14:18.095 { 00:14:18.095 "name": null, 00:14:18.095 "uuid": "f02e589e-2d6c-4945-995b-1dd883965ac6", 00:14:18.095 "is_configured": false, 00:14:18.095 "data_offset": 0, 00:14:18.095 "data_size": 63488 00:14:18.095 }, 00:14:18.095 { 00:14:18.095 "name": null, 00:14:18.095 "uuid": "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7", 00:14:18.095 "is_configured": false, 00:14:18.095 "data_offset": 0, 00:14:18.095 "data_size": 63488 00:14:18.095 }, 00:14:18.095 { 00:14:18.095 "name": "BaseBdev3", 00:14:18.095 "uuid": "f40ab788-f45d-4986-ab8b-5963a882dacb", 00:14:18.095 "is_configured": true, 00:14:18.095 "data_offset": 2048, 00:14:18.095 "data_size": 63488 00:14:18.095 } 00:14:18.095 ] 00:14:18.095 }' 00:14:18.095 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.095 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.355 02:29:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.355 [2024-11-28 02:29:51.953492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.355 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.355 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.355 "name": "Existed_Raid", 00:14:18.355 "uuid": "57daa532-fe5b-47f1-ab57-ec11a8d16900", 00:14:18.355 "strip_size_kb": 64, 00:14:18.355 "state": "configuring", 00:14:18.355 "raid_level": "raid5f", 00:14:18.355 "superblock": true, 00:14:18.355 "num_base_bdevs": 3, 00:14:18.355 "num_base_bdevs_discovered": 2, 00:14:18.355 "num_base_bdevs_operational": 3, 00:14:18.355 "base_bdevs_list": [ 00:14:18.355 { 00:14:18.355 "name": null, 00:14:18.355 "uuid": "f02e589e-2d6c-4945-995b-1dd883965ac6", 00:14:18.355 "is_configured": false, 00:14:18.355 "data_offset": 0, 00:14:18.355 "data_size": 63488 00:14:18.355 }, 00:14:18.355 { 00:14:18.355 "name": "BaseBdev2", 00:14:18.355 "uuid": "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7", 00:14:18.355 "is_configured": true, 00:14:18.355 "data_offset": 2048, 00:14:18.355 "data_size": 63488 00:14:18.355 }, 00:14:18.355 { 00:14:18.355 "name": "BaseBdev3", 00:14:18.355 "uuid": "f40ab788-f45d-4986-ab8b-5963a882dacb", 00:14:18.355 "is_configured": true, 00:14:18.355 "data_offset": 2048, 00:14:18.355 "data_size": 63488 00:14:18.355 } 00:14:18.355 ] 00:14:18.355 }' 00:14:18.355 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.355 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.926 02:29:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f02e589e-2d6c-4945-995b-1dd883965ac6 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 [2024-11-28 02:29:52.523389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:18.926 [2024-11-28 02:29:52.523737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:18.926 [2024-11-28 02:29:52.523762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:18.926 [2024-11-28 02:29:52.524087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:18.926 NewBaseBdev 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.926 02:29:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 [2024-11-28 02:29:52.529299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:18.926 [2024-11-28 02:29:52.529323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:18.926 [2024-11-28 02:29:52.529493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 [ 00:14:18.926 { 00:14:18.926 "name": "NewBaseBdev", 00:14:18.926 
"aliases": [ 00:14:18.926 "f02e589e-2d6c-4945-995b-1dd883965ac6" 00:14:18.926 ], 00:14:18.926 "product_name": "Malloc disk", 00:14:18.926 "block_size": 512, 00:14:18.926 "num_blocks": 65536, 00:14:18.926 "uuid": "f02e589e-2d6c-4945-995b-1dd883965ac6", 00:14:18.926 "assigned_rate_limits": { 00:14:18.926 "rw_ios_per_sec": 0, 00:14:18.926 "rw_mbytes_per_sec": 0, 00:14:18.926 "r_mbytes_per_sec": 0, 00:14:18.926 "w_mbytes_per_sec": 0 00:14:18.926 }, 00:14:18.926 "claimed": true, 00:14:18.926 "claim_type": "exclusive_write", 00:14:18.926 "zoned": false, 00:14:18.926 "supported_io_types": { 00:14:18.926 "read": true, 00:14:18.926 "write": true, 00:14:18.926 "unmap": true, 00:14:18.926 "flush": true, 00:14:18.926 "reset": true, 00:14:18.926 "nvme_admin": false, 00:14:18.926 "nvme_io": false, 00:14:18.926 "nvme_io_md": false, 00:14:18.926 "write_zeroes": true, 00:14:18.926 "zcopy": true, 00:14:18.926 "get_zone_info": false, 00:14:18.926 "zone_management": false, 00:14:18.926 "zone_append": false, 00:14:18.926 "compare": false, 00:14:18.926 "compare_and_write": false, 00:14:18.926 "abort": true, 00:14:18.926 "seek_hole": false, 00:14:18.926 "seek_data": false, 00:14:18.926 "copy": true, 00:14:18.926 "nvme_iov_md": false 00:14:18.926 }, 00:14:18.926 "memory_domains": [ 00:14:18.926 { 00:14:18.926 "dma_device_id": "system", 00:14:18.926 "dma_device_type": 1 00:14:18.926 }, 00:14:18.926 { 00:14:18.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.926 "dma_device_type": 2 00:14:18.926 } 00:14:18.926 ], 00:14:18.926 "driver_specific": {} 00:14:18.926 } 00:14:18.926 ] 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:18.926 02:29:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.186 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.186 "name": "Existed_Raid", 00:14:19.186 "uuid": "57daa532-fe5b-47f1-ab57-ec11a8d16900", 00:14:19.186 "strip_size_kb": 64, 00:14:19.186 "state": "online", 00:14:19.186 "raid_level": "raid5f", 00:14:19.186 "superblock": true, 00:14:19.186 
"num_base_bdevs": 3, 00:14:19.186 "num_base_bdevs_discovered": 3, 00:14:19.186 "num_base_bdevs_operational": 3, 00:14:19.186 "base_bdevs_list": [ 00:14:19.187 { 00:14:19.187 "name": "NewBaseBdev", 00:14:19.187 "uuid": "f02e589e-2d6c-4945-995b-1dd883965ac6", 00:14:19.187 "is_configured": true, 00:14:19.187 "data_offset": 2048, 00:14:19.187 "data_size": 63488 00:14:19.187 }, 00:14:19.187 { 00:14:19.187 "name": "BaseBdev2", 00:14:19.187 "uuid": "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7", 00:14:19.187 "is_configured": true, 00:14:19.187 "data_offset": 2048, 00:14:19.187 "data_size": 63488 00:14:19.187 }, 00:14:19.187 { 00:14:19.187 "name": "BaseBdev3", 00:14:19.187 "uuid": "f40ab788-f45d-4986-ab8b-5963a882dacb", 00:14:19.187 "is_configured": true, 00:14:19.187 "data_offset": 2048, 00:14:19.187 "data_size": 63488 00:14:19.187 } 00:14:19.187 ] 00:14:19.187 }' 00:14:19.187 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.187 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.447 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:19.447 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:19.447 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:19.447 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:19.447 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:19.447 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:19.447 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:19.447 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:14:19.447 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.447 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.447 [2024-11-28 02:29:52.992329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.447 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.447 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:19.447 "name": "Existed_Raid", 00:14:19.447 "aliases": [ 00:14:19.447 "57daa532-fe5b-47f1-ab57-ec11a8d16900" 00:14:19.447 ], 00:14:19.447 "product_name": "Raid Volume", 00:14:19.447 "block_size": 512, 00:14:19.447 "num_blocks": 126976, 00:14:19.447 "uuid": "57daa532-fe5b-47f1-ab57-ec11a8d16900", 00:14:19.447 "assigned_rate_limits": { 00:14:19.447 "rw_ios_per_sec": 0, 00:14:19.447 "rw_mbytes_per_sec": 0, 00:14:19.447 "r_mbytes_per_sec": 0, 00:14:19.447 "w_mbytes_per_sec": 0 00:14:19.447 }, 00:14:19.447 "claimed": false, 00:14:19.447 "zoned": false, 00:14:19.447 "supported_io_types": { 00:14:19.447 "read": true, 00:14:19.447 "write": true, 00:14:19.447 "unmap": false, 00:14:19.447 "flush": false, 00:14:19.447 "reset": true, 00:14:19.447 "nvme_admin": false, 00:14:19.447 "nvme_io": false, 00:14:19.447 "nvme_io_md": false, 00:14:19.447 "write_zeroes": true, 00:14:19.447 "zcopy": false, 00:14:19.447 "get_zone_info": false, 00:14:19.447 "zone_management": false, 00:14:19.447 "zone_append": false, 00:14:19.447 "compare": false, 00:14:19.447 "compare_and_write": false, 00:14:19.447 "abort": false, 00:14:19.447 "seek_hole": false, 00:14:19.447 "seek_data": false, 00:14:19.447 "copy": false, 00:14:19.447 "nvme_iov_md": false 00:14:19.447 }, 00:14:19.447 "driver_specific": { 00:14:19.447 "raid": { 00:14:19.447 "uuid": "57daa532-fe5b-47f1-ab57-ec11a8d16900", 00:14:19.447 
"strip_size_kb": 64, 00:14:19.447 "state": "online", 00:14:19.447 "raid_level": "raid5f", 00:14:19.447 "superblock": true, 00:14:19.447 "num_base_bdevs": 3, 00:14:19.447 "num_base_bdevs_discovered": 3, 00:14:19.447 "num_base_bdevs_operational": 3, 00:14:19.447 "base_bdevs_list": [ 00:14:19.447 { 00:14:19.447 "name": "NewBaseBdev", 00:14:19.447 "uuid": "f02e589e-2d6c-4945-995b-1dd883965ac6", 00:14:19.447 "is_configured": true, 00:14:19.447 "data_offset": 2048, 00:14:19.447 "data_size": 63488 00:14:19.447 }, 00:14:19.447 { 00:14:19.447 "name": "BaseBdev2", 00:14:19.447 "uuid": "b1465e6d-65f2-4acc-bd4d-d3cc284b55c7", 00:14:19.447 "is_configured": true, 00:14:19.447 "data_offset": 2048, 00:14:19.447 "data_size": 63488 00:14:19.447 }, 00:14:19.447 { 00:14:19.447 "name": "BaseBdev3", 00:14:19.448 "uuid": "f40ab788-f45d-4986-ab8b-5963a882dacb", 00:14:19.448 "is_configured": true, 00:14:19.448 "data_offset": 2048, 00:14:19.448 "data_size": 63488 00:14:19.448 } 00:14:19.448 ] 00:14:19.448 } 00:14:19.448 } 00:14:19.448 }' 00:14:19.448 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:19.448 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:19.448 BaseBdev2 00:14:19.448 BaseBdev3' 00:14:19.448 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.708 [2024-11-28 02:29:53.251596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.708 [2024-11-28 02:29:53.251642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.708 [2024-11-28 02:29:53.251726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.708 [2024-11-28 02:29:53.252076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.708 [2024-11-28 02:29:53.252095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80281 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80281 ']' 00:14:19.708 02:29:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80281 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80281 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.708 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80281' 00:14:19.709 killing process with pid 80281 00:14:19.709 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80281 00:14:19.709 [2024-11-28 02:29:53.294944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.709 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80281 00:14:19.968 [2024-11-28 02:29:53.618895] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.352 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:21.352 00:14:21.352 real 0m10.799s 00:14:21.352 user 0m17.047s 00:14:21.352 sys 0m1.891s 00:14:21.352 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.352 ************************************ 00:14:21.352 END TEST raid5f_state_function_test_sb 00:14:21.352 ************************************ 00:14:21.352 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.352 02:29:54 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:14:21.352 02:29:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:21.352 02:29:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.352 02:29:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.352 ************************************ 00:14:21.352 START TEST raid5f_superblock_test 00:14:21.352 ************************************ 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80896 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80896 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80896 ']' 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.352 02:29:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.352 [2024-11-28 02:29:54.994452] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:14:21.352 [2024-11-28 02:29:54.994571] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80896 ] 00:14:21.612 [2024-11-28 02:29:55.168588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.872 [2024-11-28 02:29:55.323778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.872 [2024-11-28 02:29:55.518256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.872 [2024-11-28 02:29:55.518313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.131 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.131 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:22.131 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:22.131 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:22.131 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:22.131 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:22.131 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:22.132 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:22.132 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:22.132 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:22.132 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.392 malloc1 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.392 [2024-11-28 02:29:55.855345] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:22.392 [2024-11-28 02:29:55.855458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.392 [2024-11-28 02:29:55.855497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:22.392 [2024-11-28 02:29:55.855525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.392 [2024-11-28 02:29:55.857604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.392 [2024-11-28 02:29:55.857688] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:22.392 pt1 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.392 malloc2 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.392 [2024-11-28 02:29:55.913168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:22.392 [2024-11-28 02:29:55.913217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.392 [2024-11-28 02:29:55.913241] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:22.392 [2024-11-28 02:29:55.913249] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.392 [2024-11-28 02:29:55.915209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.392 [2024-11-28 02:29:55.915241] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:22.392 pt2 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.392 malloc3 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.392 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.392 [2024-11-28 02:29:55.995859] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:22.392 [2024-11-28 02:29:55.995993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.393 [2024-11-28 02:29:55.996033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:22.393 [2024-11-28 02:29:55.996063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.393 [2024-11-28 02:29:55.998070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.393 [2024-11-28 02:29:55.998146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:22.393 pt3 00:14:22.393 02:29:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.393 [2024-11-28 02:29:56.007884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:22.393 [2024-11-28 02:29:56.009660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.393 [2024-11-28 02:29:56.009761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:22.393 [2024-11-28 02:29:56.009963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:22.393 [2024-11-28 02:29:56.010022] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:22.393 [2024-11-28 02:29:56.010254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:22.393 [2024-11-28 02:29:56.015778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:22.393 [2024-11-28 02:29:56.015830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:22.393 [2024-11-28 02:29:56.016076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.393 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.393 "name": "raid_bdev1", 00:14:22.393 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:22.393 "strip_size_kb": 64, 00:14:22.393 "state": "online", 00:14:22.393 "raid_level": "raid5f", 00:14:22.393 "superblock": true, 00:14:22.393 "num_base_bdevs": 3, 00:14:22.393 "num_base_bdevs_discovered": 3, 00:14:22.393 "num_base_bdevs_operational": 3, 00:14:22.393 "base_bdevs_list": [ 00:14:22.393 { 00:14:22.393 "name": "pt1", 00:14:22.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.393 "is_configured": true, 00:14:22.393 "data_offset": 2048, 00:14:22.393 "data_size": 63488 00:14:22.393 }, 00:14:22.393 { 00:14:22.393 "name": "pt2", 00:14:22.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.393 "is_configured": true, 00:14:22.393 "data_offset": 2048, 00:14:22.393 "data_size": 63488 00:14:22.393 }, 00:14:22.393 { 00:14:22.393 "name": "pt3", 00:14:22.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.393 "is_configured": true, 00:14:22.393 "data_offset": 2048, 00:14:22.393 "data_size": 63488 00:14:22.393 } 00:14:22.393 ] 00:14:22.393 }' 00:14:22.653 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.654 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:22.914 02:29:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:22.914 [2024-11-28 02:29:56.433770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:22.914 "name": "raid_bdev1", 00:14:22.914 "aliases": [ 00:14:22.914 "2f809b76-75e3-4cf7-b476-425b7daf0f72" 00:14:22.914 ], 00:14:22.914 "product_name": "Raid Volume", 00:14:22.914 "block_size": 512, 00:14:22.914 "num_blocks": 126976, 00:14:22.914 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:22.914 "assigned_rate_limits": { 00:14:22.914 "rw_ios_per_sec": 0, 00:14:22.914 "rw_mbytes_per_sec": 0, 00:14:22.914 "r_mbytes_per_sec": 0, 00:14:22.914 "w_mbytes_per_sec": 0 00:14:22.914 }, 00:14:22.914 "claimed": false, 00:14:22.914 "zoned": false, 00:14:22.914 "supported_io_types": { 00:14:22.914 "read": true, 00:14:22.914 "write": true, 00:14:22.914 "unmap": false, 00:14:22.914 "flush": false, 00:14:22.914 "reset": true, 00:14:22.914 "nvme_admin": false, 00:14:22.914 "nvme_io": false, 00:14:22.914 "nvme_io_md": false, 
00:14:22.914 "write_zeroes": true, 00:14:22.914 "zcopy": false, 00:14:22.914 "get_zone_info": false, 00:14:22.914 "zone_management": false, 00:14:22.914 "zone_append": false, 00:14:22.914 "compare": false, 00:14:22.914 "compare_and_write": false, 00:14:22.914 "abort": false, 00:14:22.914 "seek_hole": false, 00:14:22.914 "seek_data": false, 00:14:22.914 "copy": false, 00:14:22.914 "nvme_iov_md": false 00:14:22.914 }, 00:14:22.914 "driver_specific": { 00:14:22.914 "raid": { 00:14:22.914 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:22.914 "strip_size_kb": 64, 00:14:22.914 "state": "online", 00:14:22.914 "raid_level": "raid5f", 00:14:22.914 "superblock": true, 00:14:22.914 "num_base_bdevs": 3, 00:14:22.914 "num_base_bdevs_discovered": 3, 00:14:22.914 "num_base_bdevs_operational": 3, 00:14:22.914 "base_bdevs_list": [ 00:14:22.914 { 00:14:22.914 "name": "pt1", 00:14:22.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.914 "is_configured": true, 00:14:22.914 "data_offset": 2048, 00:14:22.914 "data_size": 63488 00:14:22.914 }, 00:14:22.914 { 00:14:22.914 "name": "pt2", 00:14:22.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.914 "is_configured": true, 00:14:22.914 "data_offset": 2048, 00:14:22.914 "data_size": 63488 00:14:22.914 }, 00:14:22.914 { 00:14:22.914 "name": "pt3", 00:14:22.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.914 "is_configured": true, 00:14:22.914 "data_offset": 2048, 00:14:22.914 "data_size": 63488 00:14:22.914 } 00:14:22.914 ] 00:14:22.914 } 00:14:22.914 } 00:14:22.914 }' 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:22.914 pt2 00:14:22.914 pt3' 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.914 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.175 
02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.175 [2024-11-28 02:29:56.709202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2f809b76-75e3-4cf7-b476-425b7daf0f72 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2f809b76-75e3-4cf7-b476-425b7daf0f72 ']' 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:23.175 02:29:56 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.175 [2024-11-28 02:29:56.741022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:23.175 [2024-11-28 02:29:56.741092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.175 [2024-11-28 02:29:56.741162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.175 [2024-11-28 02:29:56.741232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.175 [2024-11-28 02:29:56.741241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:23.175 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.176 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.437 [2024-11-28 02:29:56.876837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:23.437 [2024-11-28 02:29:56.878739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:23.437 [2024-11-28 02:29:56.878792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:23.437 [2024-11-28 02:29:56.878839] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:23.437 [2024-11-28 02:29:56.878885] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:23.437 [2024-11-28 02:29:56.878903] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:23.437 [2024-11-28 02:29:56.878931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:23.437 [2024-11-28 02:29:56.878949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:23.437 request: 00:14:23.437 { 00:14:23.437 "name": "raid_bdev1", 00:14:23.437 "raid_level": "raid5f", 00:14:23.437 "base_bdevs": [ 00:14:23.437 "malloc1", 00:14:23.437 "malloc2", 00:14:23.437 "malloc3" 00:14:23.437 ], 00:14:23.437 "strip_size_kb": 64, 00:14:23.437 "superblock": false, 00:14:23.437 "method": "bdev_raid_create", 00:14:23.437 "req_id": 1 00:14:23.437 } 00:14:23.437 Got JSON-RPC error response 00:14:23.437 response: 00:14:23.437 { 00:14:23.437 "code": -17, 00:14:23.437 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:23.437 } 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.437 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.438 
02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.438 [2024-11-28 02:29:56.944668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:23.438 [2024-11-28 02:29:56.944756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.438 [2024-11-28 02:29:56.944797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:23.438 [2024-11-28 02:29:56.944827] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.438 [2024-11-28 02:29:56.947006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.438 [2024-11-28 02:29:56.947073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:23.438 [2024-11-28 02:29:56.947163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:23.438 [2024-11-28 02:29:56.947235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:23.438 pt1 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.438 "name": "raid_bdev1", 00:14:23.438 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:23.438 "strip_size_kb": 64, 00:14:23.438 "state": "configuring", 00:14:23.438 "raid_level": "raid5f", 00:14:23.438 "superblock": true, 00:14:23.438 "num_base_bdevs": 3, 00:14:23.438 "num_base_bdevs_discovered": 1, 00:14:23.438 
"num_base_bdevs_operational": 3, 00:14:23.438 "base_bdevs_list": [ 00:14:23.438 { 00:14:23.438 "name": "pt1", 00:14:23.438 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.438 "is_configured": true, 00:14:23.438 "data_offset": 2048, 00:14:23.438 "data_size": 63488 00:14:23.438 }, 00:14:23.438 { 00:14:23.438 "name": null, 00:14:23.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.438 "is_configured": false, 00:14:23.438 "data_offset": 2048, 00:14:23.438 "data_size": 63488 00:14:23.438 }, 00:14:23.438 { 00:14:23.438 "name": null, 00:14:23.438 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:23.438 "is_configured": false, 00:14:23.438 "data_offset": 2048, 00:14:23.438 "data_size": 63488 00:14:23.438 } 00:14:23.438 ] 00:14:23.438 }' 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.438 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.699 [2024-11-28 02:29:57.356066] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:23.699 [2024-11-28 02:29:57.356129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.699 [2024-11-28 02:29:57.356152] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:23.699 [2024-11-28 02:29:57.356165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.699 [2024-11-28 02:29:57.356602] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.699 [2024-11-28 02:29:57.356631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:23.699 [2024-11-28 02:29:57.356716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:23.699 [2024-11-28 02:29:57.356744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:23.699 pt2 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.699 [2024-11-28 02:29:57.368067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.699 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.959 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.959 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.959 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.959 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.959 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.959 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.959 "name": "raid_bdev1", 00:14:23.959 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:23.959 "strip_size_kb": 64, 00:14:23.959 "state": "configuring", 00:14:23.959 "raid_level": "raid5f", 00:14:23.959 "superblock": true, 00:14:23.959 "num_base_bdevs": 3, 00:14:23.960 "num_base_bdevs_discovered": 1, 00:14:23.960 "num_base_bdevs_operational": 3, 00:14:23.960 "base_bdevs_list": [ 00:14:23.960 { 00:14:23.960 "name": "pt1", 00:14:23.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.960 "is_configured": true, 00:14:23.960 "data_offset": 2048, 00:14:23.960 "data_size": 63488 00:14:23.960 }, 00:14:23.960 { 00:14:23.960 "name": null, 00:14:23.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.960 "is_configured": false, 00:14:23.960 "data_offset": 0, 00:14:23.960 "data_size": 63488 00:14:23.960 }, 00:14:23.960 { 00:14:23.960 "name": null, 00:14:23.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:23.960 "is_configured": false, 00:14:23.960 "data_offset": 2048, 00:14:23.960 "data_size": 63488 00:14:23.960 } 00:14:23.960 ] 00:14:23.960 }' 00:14:23.960 02:29:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.960 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.220 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:24.220 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:24.220 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:24.220 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.220 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.220 [2024-11-28 02:29:57.803981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:24.220 [2024-11-28 02:29:57.804087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.220 [2024-11-28 02:29:57.804122] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:24.220 [2024-11-28 02:29:57.804151] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.220 [2024-11-28 02:29:57.804633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.220 [2024-11-28 02:29:57.804706] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:24.220 [2024-11-28 02:29:57.804818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:24.220 [2024-11-28 02:29:57.804873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:24.220 pt2 00:14:24.220 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.220 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:24.220 02:29:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:24.220 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:24.220 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.220 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.220 [2024-11-28 02:29:57.815949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:24.220 [2024-11-28 02:29:57.815989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.220 [2024-11-28 02:29:57.816002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:24.220 [2024-11-28 02:29:57.816012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.220 [2024-11-28 02:29:57.816350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.220 [2024-11-28 02:29:57.816370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:24.220 [2024-11-28 02:29:57.816425] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:24.220 [2024-11-28 02:29:57.816443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:24.220 [2024-11-28 02:29:57.816566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:24.220 [2024-11-28 02:29:57.816579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:24.220 [2024-11-28 02:29:57.816794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:24.220 [2024-11-28 02:29:57.821844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:24.221 [2024-11-28 02:29:57.821863] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:24.221 [2024-11-28 02:29:57.822078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.221 pt3 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.221 "name": "raid_bdev1", 00:14:24.221 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:24.221 "strip_size_kb": 64, 00:14:24.221 "state": "online", 00:14:24.221 "raid_level": "raid5f", 00:14:24.221 "superblock": true, 00:14:24.221 "num_base_bdevs": 3, 00:14:24.221 "num_base_bdevs_discovered": 3, 00:14:24.221 "num_base_bdevs_operational": 3, 00:14:24.221 "base_bdevs_list": [ 00:14:24.221 { 00:14:24.221 "name": "pt1", 00:14:24.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.221 "is_configured": true, 00:14:24.221 "data_offset": 2048, 00:14:24.221 "data_size": 63488 00:14:24.221 }, 00:14:24.221 { 00:14:24.221 "name": "pt2", 00:14:24.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.221 "is_configured": true, 00:14:24.221 "data_offset": 2048, 00:14:24.221 "data_size": 63488 00:14:24.221 }, 00:14:24.221 { 00:14:24.221 "name": "pt3", 00:14:24.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.221 "is_configured": true, 00:14:24.221 "data_offset": 2048, 00:14:24.221 "data_size": 63488 00:14:24.221 } 00:14:24.221 ] 00:14:24.221 }' 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.221 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 [2024-11-28 02:29:58.252124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:24.792 "name": "raid_bdev1", 00:14:24.792 "aliases": [ 00:14:24.792 "2f809b76-75e3-4cf7-b476-425b7daf0f72" 00:14:24.792 ], 00:14:24.792 "product_name": "Raid Volume", 00:14:24.792 "block_size": 512, 00:14:24.792 "num_blocks": 126976, 00:14:24.792 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:24.792 "assigned_rate_limits": { 00:14:24.792 "rw_ios_per_sec": 0, 00:14:24.792 "rw_mbytes_per_sec": 0, 00:14:24.792 "r_mbytes_per_sec": 0, 00:14:24.792 "w_mbytes_per_sec": 0 00:14:24.792 }, 00:14:24.792 "claimed": false, 00:14:24.792 "zoned": false, 00:14:24.792 "supported_io_types": { 00:14:24.792 "read": true, 00:14:24.792 "write": true, 00:14:24.792 "unmap": false, 00:14:24.792 "flush": false, 00:14:24.792 "reset": true, 00:14:24.792 "nvme_admin": false, 00:14:24.792 "nvme_io": false, 00:14:24.792 "nvme_io_md": false, 00:14:24.792 "write_zeroes": true, 00:14:24.792 "zcopy": false, 00:14:24.792 
"get_zone_info": false, 00:14:24.792 "zone_management": false, 00:14:24.792 "zone_append": false, 00:14:24.792 "compare": false, 00:14:24.792 "compare_and_write": false, 00:14:24.792 "abort": false, 00:14:24.792 "seek_hole": false, 00:14:24.792 "seek_data": false, 00:14:24.792 "copy": false, 00:14:24.792 "nvme_iov_md": false 00:14:24.792 }, 00:14:24.792 "driver_specific": { 00:14:24.792 "raid": { 00:14:24.792 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:24.792 "strip_size_kb": 64, 00:14:24.792 "state": "online", 00:14:24.792 "raid_level": "raid5f", 00:14:24.792 "superblock": true, 00:14:24.792 "num_base_bdevs": 3, 00:14:24.792 "num_base_bdevs_discovered": 3, 00:14:24.792 "num_base_bdevs_operational": 3, 00:14:24.792 "base_bdevs_list": [ 00:14:24.792 { 00:14:24.792 "name": "pt1", 00:14:24.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.792 "is_configured": true, 00:14:24.792 "data_offset": 2048, 00:14:24.792 "data_size": 63488 00:14:24.792 }, 00:14:24.792 { 00:14:24.792 "name": "pt2", 00:14:24.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.792 "is_configured": true, 00:14:24.792 "data_offset": 2048, 00:14:24.792 "data_size": 63488 00:14:24.792 }, 00:14:24.792 { 00:14:24.792 "name": "pt3", 00:14:24.792 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.792 "is_configured": true, 00:14:24.792 "data_offset": 2048, 00:14:24.792 "data_size": 63488 00:14:24.792 } 00:14:24.792 ] 00:14:24.792 } 00:14:24.792 } 00:14:24.792 }' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:24.792 pt2 00:14:24.792 pt3' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.792 02:29:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:24.792 [2024-11-28 02:29:58.463715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2f809b76-75e3-4cf7-b476-425b7daf0f72 '!=' 2f809b76-75e3-4cf7-b476-425b7daf0f72 ']' 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.053 [2024-11-28 02:29:58.507509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.053 "name": "raid_bdev1", 00:14:25.053 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:25.053 "strip_size_kb": 64, 00:14:25.053 "state": "online", 00:14:25.053 "raid_level": "raid5f", 00:14:25.053 "superblock": true, 00:14:25.053 "num_base_bdevs": 3, 00:14:25.053 "num_base_bdevs_discovered": 2, 00:14:25.053 "num_base_bdevs_operational": 2, 00:14:25.053 "base_bdevs_list": [ 00:14:25.053 { 00:14:25.053 "name": null, 00:14:25.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.053 "is_configured": false, 00:14:25.053 "data_offset": 0, 00:14:25.053 "data_size": 63488 00:14:25.053 }, 00:14:25.053 { 00:14:25.053 "name": "pt2", 00:14:25.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.053 "is_configured": true, 00:14:25.053 "data_offset": 2048, 00:14:25.053 "data_size": 63488 00:14:25.053 }, 00:14:25.053 { 00:14:25.053 "name": "pt3", 00:14:25.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.053 "is_configured": true, 00:14:25.053 "data_offset": 2048, 00:14:25.053 "data_size": 63488 00:14:25.053 } 00:14:25.053 ] 00:14:25.053 }' 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.053 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.314 [2024-11-28 02:29:58.914827] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.314 [2024-11-28 02:29:58.914909] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.314 [2024-11-28 02:29:58.915024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.314 [2024-11-28 02:29:58.915101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.314 [2024-11-28 02:29:58.915150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.314 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.314 [2024-11-28 02:29:58.990679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:25.314 [2024-11-28 02:29:58.990748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.314 [2024-11-28 02:29:58.990771] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:25.314 [2024-11-28 02:29:58.990785] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:25.575 [2024-11-28 02:29:58.993234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.575 [2024-11-28 02:29:58.993340] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:25.575 [2024-11-28 02:29:58.993446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:25.575 [2024-11-28 02:29:58.993508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:25.575 pt2 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.575 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.575 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:25.575 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.575 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.575 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.575 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.575 "name": "raid_bdev1", 00:14:25.575 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:25.575 "strip_size_kb": 64, 00:14:25.575 "state": "configuring", 00:14:25.575 "raid_level": "raid5f", 00:14:25.575 "superblock": true, 00:14:25.575 "num_base_bdevs": 3, 00:14:25.575 "num_base_bdevs_discovered": 1, 00:14:25.575 "num_base_bdevs_operational": 2, 00:14:25.575 "base_bdevs_list": [ 00:14:25.575 { 00:14:25.575 "name": null, 00:14:25.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.575 "is_configured": false, 00:14:25.575 "data_offset": 2048, 00:14:25.575 "data_size": 63488 00:14:25.575 }, 00:14:25.575 { 00:14:25.575 "name": "pt2", 00:14:25.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.575 "is_configured": true, 00:14:25.575 "data_offset": 2048, 00:14:25.575 "data_size": 63488 00:14:25.575 }, 00:14:25.575 { 00:14:25.575 "name": null, 00:14:25.575 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.575 "is_configured": false, 00:14:25.575 "data_offset": 2048, 00:14:25.575 "data_size": 63488 00:14:25.575 } 00:14:25.575 ] 00:14:25.575 }' 00:14:25.575 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.575 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.835 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.836 [2024-11-28 02:29:59.370088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:25.836 [2024-11-28 02:29:59.370240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.836 [2024-11-28 02:29:59.370277] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:25.836 [2024-11-28 02:29:59.370294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.836 [2024-11-28 02:29:59.370919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.836 [2024-11-28 02:29:59.370975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:25.836 [2024-11-28 02:29:59.371092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:25.836 [2024-11-28 02:29:59.371136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:25.836 [2024-11-28 02:29:59.371289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:25.836 [2024-11-28 02:29:59.371307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:25.836 [2024-11-28 02:29:59.371639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:25.836 [2024-11-28 02:29:59.376717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:25.836 [2024-11-28 02:29:59.376745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:14:25.836 [2024-11-28 02:29:59.377160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.836 pt3 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.836 02:29:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.836 "name": "raid_bdev1", 00:14:25.836 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:25.836 "strip_size_kb": 64, 00:14:25.836 "state": "online", 00:14:25.836 "raid_level": "raid5f", 00:14:25.836 "superblock": true, 00:14:25.836 "num_base_bdevs": 3, 00:14:25.836 "num_base_bdevs_discovered": 2, 00:14:25.836 "num_base_bdevs_operational": 2, 00:14:25.836 "base_bdevs_list": [ 00:14:25.836 { 00:14:25.836 "name": null, 00:14:25.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.836 "is_configured": false, 00:14:25.836 "data_offset": 2048, 00:14:25.836 "data_size": 63488 00:14:25.836 }, 00:14:25.836 { 00:14:25.836 "name": "pt2", 00:14:25.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.836 "is_configured": true, 00:14:25.836 "data_offset": 2048, 00:14:25.836 "data_size": 63488 00:14:25.836 }, 00:14:25.836 { 00:14:25.836 "name": "pt3", 00:14:25.836 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.836 "is_configured": true, 00:14:25.836 "data_offset": 2048, 00:14:25.836 "data_size": 63488 00:14:25.836 } 00:14:25.836 ] 00:14:25.836 }' 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.836 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 [2024-11-28 02:29:59.794967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.410 [2024-11-28 02:29:59.795046] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.410 [2024-11-28 02:29:59.795145] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.410 [2024-11-28 02:29:59.795226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.410 [2024-11-28 02:29:59.795279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 [2024-11-28 02:29:59.870828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:26.410 [2024-11-28 02:29:59.870880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.410 [2024-11-28 02:29:59.870899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:26.410 [2024-11-28 02:29:59.870908] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.410 [2024-11-28 02:29:59.873213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.410 [2024-11-28 02:29:59.873248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:26.410 [2024-11-28 02:29:59.873335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:26.410 [2024-11-28 02:29:59.873383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:26.410 [2024-11-28 02:29:59.873521] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:26.410 [2024-11-28 02:29:59.873532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.410 [2024-11-28 02:29:59.873546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:26.410 [2024-11-28 02:29:59.873603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.410 pt1 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:26.410 02:29:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.410 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.411 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.411 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.411 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.411 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.411 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.411 "name": "raid_bdev1", 00:14:26.411 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:26.411 "strip_size_kb": 64, 00:14:26.411 "state": "configuring", 00:14:26.411 "raid_level": "raid5f", 00:14:26.411 
"superblock": true, 00:14:26.411 "num_base_bdevs": 3, 00:14:26.411 "num_base_bdevs_discovered": 1, 00:14:26.411 "num_base_bdevs_operational": 2, 00:14:26.411 "base_bdevs_list": [ 00:14:26.411 { 00:14:26.411 "name": null, 00:14:26.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.411 "is_configured": false, 00:14:26.411 "data_offset": 2048, 00:14:26.411 "data_size": 63488 00:14:26.411 }, 00:14:26.411 { 00:14:26.411 "name": "pt2", 00:14:26.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.411 "is_configured": true, 00:14:26.411 "data_offset": 2048, 00:14:26.411 "data_size": 63488 00:14:26.411 }, 00:14:26.411 { 00:14:26.411 "name": null, 00:14:26.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.411 "is_configured": false, 00:14:26.411 "data_offset": 2048, 00:14:26.411 "data_size": 63488 00:14:26.411 } 00:14:26.411 ] 00:14:26.411 }' 00:14:26.411 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.411 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.679 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:26.679 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:26.679 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.679 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.679 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.979 [2024-11-28 02:30:00.381975] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:26.979 [2024-11-28 02:30:00.382078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.979 [2024-11-28 02:30:00.382116] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:26.979 [2024-11-28 02:30:00.382145] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.979 [2024-11-28 02:30:00.382644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.979 [2024-11-28 02:30:00.382699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:26.979 [2024-11-28 02:30:00.382804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:26.979 [2024-11-28 02:30:00.382854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:26.979 [2024-11-28 02:30:00.383017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:26.979 [2024-11-28 02:30:00.383055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:26.979 [2024-11-28 02:30:00.383317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:26.979 [2024-11-28 02:30:00.389127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:26.979 [2024-11-28 02:30:00.389190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:26.979 [2024-11-28 02:30:00.389451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.979 pt3 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.979 "name": "raid_bdev1", 00:14:26.979 "uuid": "2f809b76-75e3-4cf7-b476-425b7daf0f72", 00:14:26.979 "strip_size_kb": 64, 00:14:26.979 "state": "online", 00:14:26.979 "raid_level": 
"raid5f", 00:14:26.979 "superblock": true, 00:14:26.979 "num_base_bdevs": 3, 00:14:26.979 "num_base_bdevs_discovered": 2, 00:14:26.979 "num_base_bdevs_operational": 2, 00:14:26.979 "base_bdevs_list": [ 00:14:26.979 { 00:14:26.979 "name": null, 00:14:26.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.979 "is_configured": false, 00:14:26.979 "data_offset": 2048, 00:14:26.979 "data_size": 63488 00:14:26.979 }, 00:14:26.979 { 00:14:26.979 "name": "pt2", 00:14:26.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.979 "is_configured": true, 00:14:26.979 "data_offset": 2048, 00:14:26.979 "data_size": 63488 00:14:26.979 }, 00:14:26.979 { 00:14:26.979 "name": "pt3", 00:14:26.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.979 "is_configured": true, 00:14:26.979 "data_offset": 2048, 00:14:26.979 "data_size": 63488 00:14:26.979 } 00:14:26.979 ] 00:14:26.979 }' 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.979 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:27.239 [2024-11-28 02:30:00.871603] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.239 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2f809b76-75e3-4cf7-b476-425b7daf0f72 '!=' 2f809b76-75e3-4cf7-b476-425b7daf0f72 ']' 00:14:27.240 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80896 00:14:27.240 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80896 ']' 00:14:27.240 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80896 00:14:27.240 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:27.500 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.500 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80896 00:14:27.500 killing process with pid 80896 00:14:27.500 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.500 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.500 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80896' 00:14:27.500 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80896 00:14:27.500 [2024-11-28 02:30:00.954943] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.500 [2024-11-28 02:30:00.955023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:14:27.500 [2024-11-28 02:30:00.955080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.500 [2024-11-28 02:30:00.955091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:27.500 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80896 00:14:27.761 [2024-11-28 02:30:01.231538] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.701 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:28.701 00:14:28.701 real 0m7.382s 00:14:28.701 user 0m11.519s 00:14:28.701 sys 0m1.313s 00:14:28.701 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.701 ************************************ 00:14:28.701 END TEST raid5f_superblock_test 00:14:28.701 ************************************ 00:14:28.701 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.701 02:30:02 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:28.701 02:30:02 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:28.701 02:30:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:28.701 02:30:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.701 02:30:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.701 ************************************ 00:14:28.701 START TEST raid5f_rebuild_test 00:14:28.701 ************************************ 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81334 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:28.701 02:30:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81334 00:14:28.961 02:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81334 ']' 00:14:28.961 02:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.961 02:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.961 02:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:28.961 02:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.961 02:30:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.961 [2024-11-28 02:30:02.460724] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:28.961 [2024-11-28 02:30:02.460924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:28.961 Zero copy mechanism will not be used. 00:14:28.961 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81334 ] 00:14:28.961 [2024-11-28 02:30:02.634613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.221 [2024-11-28 02:30:02.737042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.481 [2024-11-28 02:30:02.921377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.481 [2024-11-28 02:30:02.921404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.741 BaseBdev1_malloc 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.741 
02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.741 [2024-11-28 02:30:03.315643] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:29.741 [2024-11-28 02:30:03.315702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.741 [2024-11-28 02:30:03.315739] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.741 [2024-11-28 02:30:03.315750] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.741 [2024-11-28 02:30:03.317779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.741 [2024-11-28 02:30:03.317818] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.741 BaseBdev1 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.741 BaseBdev2_malloc 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.741 [2024-11-28 02:30:03.365026] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:29.741 [2024-11-28 02:30:03.365123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.741 [2024-11-28 02:30:03.365163] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.741 [2024-11-28 02:30:03.365174] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.741 [2024-11-28 02:30:03.367133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.741 [2024-11-28 02:30:03.367168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.741 BaseBdev2 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.741 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.002 BaseBdev3_malloc 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.002 [2024-11-28 02:30:03.452962] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:30.002 [2024-11-28 02:30:03.453064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.002 [2024-11-28 02:30:03.453105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:30.002 [2024-11-28 02:30:03.453117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.002 [2024-11-28 02:30:03.455141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.002 [2024-11-28 02:30:03.455176] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:30.002 BaseBdev3 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.002 spare_malloc 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.002 spare_delay 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.002 [2024-11-28 02:30:03.519533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:30.002 [2024-11-28 02:30:03.519584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.002 [2024-11-28 02:30:03.519600] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:30.002 [2024-11-28 02:30:03.519610] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.002 [2024-11-28 02:30:03.521651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.002 [2024-11-28 02:30:03.521692] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:30.002 spare 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.002 [2024-11-28 02:30:03.531573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.002 [2024-11-28 02:30:03.533313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.002 [2024-11-28 02:30:03.533372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.002 [2024-11-28 02:30:03.533446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:30.002 [2024-11-28 02:30:03.533457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:30.002 [2024-11-28 
02:30:03.533682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:30.002 [2024-11-28 02:30:03.539034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:30.002 [2024-11-28 02:30:03.539055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:30.002 [2024-11-28 02:30:03.539228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.002 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.003 "name": "raid_bdev1", 00:14:30.003 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:30.003 "strip_size_kb": 64, 00:14:30.003 "state": "online", 00:14:30.003 "raid_level": "raid5f", 00:14:30.003 "superblock": false, 00:14:30.003 "num_base_bdevs": 3, 00:14:30.003 "num_base_bdevs_discovered": 3, 00:14:30.003 "num_base_bdevs_operational": 3, 00:14:30.003 "base_bdevs_list": [ 00:14:30.003 { 00:14:30.003 "name": "BaseBdev1", 00:14:30.003 "uuid": "1e7a527c-adb6-58f5-8582-e6f613960bb0", 00:14:30.003 "is_configured": true, 00:14:30.003 "data_offset": 0, 00:14:30.003 "data_size": 65536 00:14:30.003 }, 00:14:30.003 { 00:14:30.003 "name": "BaseBdev2", 00:14:30.003 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:30.003 "is_configured": true, 00:14:30.003 "data_offset": 0, 00:14:30.003 "data_size": 65536 00:14:30.003 }, 00:14:30.003 { 00:14:30.003 "name": "BaseBdev3", 00:14:30.003 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:30.003 "is_configured": true, 00:14:30.003 "data_offset": 0, 00:14:30.003 "data_size": 65536 00:14:30.003 } 00:14:30.003 ] 00:14:30.003 }' 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.003 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.573 02:30:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.573 [2024-11-28 02:30:04.012734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.573 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:30.833 [2024-11-28 02:30:04.252184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:30.833 /dev/nbd0 00:14:30.833 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:30.833 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:30.833 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:30.833 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:30.833 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:30.833 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:30.833 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:30.833 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:30.833 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:30.834 1+0 records in 00:14:30.834 1+0 records out 00:14:30.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418597 s, 9.8 MB/s 00:14:30.834 
02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:30.834 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:31.093 512+0 records in 00:14:31.093 512+0 records out 00:14:31.093 67108864 bytes (67 MB, 64 MiB) copied, 0.358459 s, 187 MB/s 00:14:31.093 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:31.093 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.093 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:31.093 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.093 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:31.093 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:14:31.093 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:31.352 [2024-11-28 02:30:04.912018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.352 [2024-11-28 02:30:04.927244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.352 02:30:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.352 "name": "raid_bdev1", 00:14:31.352 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:31.352 "strip_size_kb": 64, 00:14:31.352 "state": "online", 00:14:31.352 "raid_level": "raid5f", 00:14:31.352 "superblock": false, 00:14:31.352 "num_base_bdevs": 3, 00:14:31.352 "num_base_bdevs_discovered": 2, 00:14:31.352 "num_base_bdevs_operational": 2, 00:14:31.352 "base_bdevs_list": [ 00:14:31.352 { 00:14:31.352 "name": null, 00:14:31.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.352 "is_configured": false, 00:14:31.352 "data_offset": 0, 00:14:31.352 "data_size": 65536 00:14:31.352 }, 00:14:31.352 { 00:14:31.352 
"name": "BaseBdev2", 00:14:31.352 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:31.352 "is_configured": true, 00:14:31.352 "data_offset": 0, 00:14:31.352 "data_size": 65536 00:14:31.352 }, 00:14:31.352 { 00:14:31.352 "name": "BaseBdev3", 00:14:31.352 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:31.352 "is_configured": true, 00:14:31.352 "data_offset": 0, 00:14:31.352 "data_size": 65536 00:14:31.352 } 00:14:31.352 ] 00:14:31.352 }' 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.352 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.921 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:31.921 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.921 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.921 [2024-11-28 02:30:05.366480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.921 [2024-11-28 02:30:05.382244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:31.921 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.921 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:31.921 [2024-11-28 02:30:05.389074] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.860 "name": "raid_bdev1", 00:14:32.860 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:32.860 "strip_size_kb": 64, 00:14:32.860 "state": "online", 00:14:32.860 "raid_level": "raid5f", 00:14:32.860 "superblock": false, 00:14:32.860 "num_base_bdevs": 3, 00:14:32.860 "num_base_bdevs_discovered": 3, 00:14:32.860 "num_base_bdevs_operational": 3, 00:14:32.860 "process": { 00:14:32.860 "type": "rebuild", 00:14:32.860 "target": "spare", 00:14:32.860 "progress": { 00:14:32.860 "blocks": 20480, 00:14:32.860 "percent": 15 00:14:32.860 } 00:14:32.860 }, 00:14:32.860 "base_bdevs_list": [ 00:14:32.860 { 00:14:32.860 "name": "spare", 00:14:32.860 "uuid": "5f946468-d285-54d3-9529-7bfb39866136", 00:14:32.860 "is_configured": true, 00:14:32.860 "data_offset": 0, 00:14:32.860 "data_size": 65536 00:14:32.860 }, 00:14:32.860 { 00:14:32.860 "name": "BaseBdev2", 00:14:32.860 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:32.860 "is_configured": true, 00:14:32.860 "data_offset": 0, 00:14:32.860 "data_size": 65536 00:14:32.860 }, 00:14:32.860 { 00:14:32.860 "name": "BaseBdev3", 00:14:32.860 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:32.860 "is_configured": true, 00:14:32.860 "data_offset": 0, 00:14:32.860 
"data_size": 65536 00:14:32.860 } 00:14:32.860 ] 00:14:32.860 }' 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.860 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.122 [2024-11-28 02:30:06.548165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.122 [2024-11-28 02:30:06.597088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.122 [2024-11-28 02:30:06.597158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.122 [2024-11-28 02:30:06.597179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.122 [2024-11-28 02:30:06.597189] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.122 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.123 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.123 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.123 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.123 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.123 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.123 "name": "raid_bdev1", 00:14:33.123 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:33.123 "strip_size_kb": 64, 00:14:33.123 "state": "online", 00:14:33.123 "raid_level": "raid5f", 00:14:33.123 "superblock": false, 00:14:33.123 "num_base_bdevs": 3, 00:14:33.123 "num_base_bdevs_discovered": 2, 00:14:33.123 "num_base_bdevs_operational": 2, 00:14:33.123 "base_bdevs_list": [ 00:14:33.123 { 00:14:33.123 "name": null, 00:14:33.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.123 "is_configured": false, 00:14:33.123 "data_offset": 0, 00:14:33.123 "data_size": 65536 00:14:33.123 }, 00:14:33.123 { 00:14:33.123 "name": "BaseBdev2", 00:14:33.123 
"uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:33.123 "is_configured": true, 00:14:33.123 "data_offset": 0, 00:14:33.123 "data_size": 65536 00:14:33.123 }, 00:14:33.123 { 00:14:33.123 "name": "BaseBdev3", 00:14:33.123 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:33.123 "is_configured": true, 00:14:33.123 "data_offset": 0, 00:14:33.123 "data_size": 65536 00:14:33.123 } 00:14:33.123 ] 00:14:33.123 }' 00:14:33.123 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.123 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.382 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.383 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.383 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.383 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.383 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.383 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.383 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.383 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.383 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.643 "name": "raid_bdev1", 00:14:33.643 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:33.643 "strip_size_kb": 64, 00:14:33.643 "state": "online", 00:14:33.643 "raid_level": 
"raid5f", 00:14:33.643 "superblock": false, 00:14:33.643 "num_base_bdevs": 3, 00:14:33.643 "num_base_bdevs_discovered": 2, 00:14:33.643 "num_base_bdevs_operational": 2, 00:14:33.643 "base_bdevs_list": [ 00:14:33.643 { 00:14:33.643 "name": null, 00:14:33.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.643 "is_configured": false, 00:14:33.643 "data_offset": 0, 00:14:33.643 "data_size": 65536 00:14:33.643 }, 00:14:33.643 { 00:14:33.643 "name": "BaseBdev2", 00:14:33.643 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:33.643 "is_configured": true, 00:14:33.643 "data_offset": 0, 00:14:33.643 "data_size": 65536 00:14:33.643 }, 00:14:33.643 { 00:14:33.643 "name": "BaseBdev3", 00:14:33.643 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:33.643 "is_configured": true, 00:14:33.643 "data_offset": 0, 00:14:33.643 "data_size": 65536 00:14:33.643 } 00:14:33.643 ] 00:14:33.643 }' 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.643 [2024-11-28 02:30:07.191474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.643 [2024-11-28 02:30:07.206526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.643 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:33.643 [2024-11-28 02:30:07.213211] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.583 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.583 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.583 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.583 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.583 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.583 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.583 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.583 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.583 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.583 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.844 "name": "raid_bdev1", 00:14:34.844 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:34.844 "strip_size_kb": 64, 00:14:34.844 "state": "online", 00:14:34.844 "raid_level": "raid5f", 00:14:34.844 "superblock": false, 00:14:34.844 "num_base_bdevs": 3, 00:14:34.844 "num_base_bdevs_discovered": 3, 00:14:34.844 "num_base_bdevs_operational": 3, 00:14:34.844 "process": { 00:14:34.844 "type": "rebuild", 00:14:34.844 "target": "spare", 00:14:34.844 "progress": { 00:14:34.844 "blocks": 20480, 00:14:34.844 
"percent": 15 00:14:34.844 } 00:14:34.844 }, 00:14:34.844 "base_bdevs_list": [ 00:14:34.844 { 00:14:34.844 "name": "spare", 00:14:34.844 "uuid": "5f946468-d285-54d3-9529-7bfb39866136", 00:14:34.844 "is_configured": true, 00:14:34.844 "data_offset": 0, 00:14:34.844 "data_size": 65536 00:14:34.844 }, 00:14:34.844 { 00:14:34.844 "name": "BaseBdev2", 00:14:34.844 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:34.844 "is_configured": true, 00:14:34.844 "data_offset": 0, 00:14:34.844 "data_size": 65536 00:14:34.844 }, 00:14:34.844 { 00:14:34.844 "name": "BaseBdev3", 00:14:34.844 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:34.844 "is_configured": true, 00:14:34.844 "data_offset": 0, 00:14:34.844 "data_size": 65536 00:14:34.844 } 00:14:34.844 ] 00:14:34.844 }' 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=538 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.844 "name": "raid_bdev1", 00:14:34.844 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:34.844 "strip_size_kb": 64, 00:14:34.844 "state": "online", 00:14:34.844 "raid_level": "raid5f", 00:14:34.844 "superblock": false, 00:14:34.844 "num_base_bdevs": 3, 00:14:34.844 "num_base_bdevs_discovered": 3, 00:14:34.844 "num_base_bdevs_operational": 3, 00:14:34.844 "process": { 00:14:34.844 "type": "rebuild", 00:14:34.844 "target": "spare", 00:14:34.844 "progress": { 00:14:34.844 "blocks": 22528, 00:14:34.844 "percent": 17 00:14:34.844 } 00:14:34.844 }, 00:14:34.844 "base_bdevs_list": [ 00:14:34.844 { 00:14:34.844 "name": "spare", 00:14:34.844 "uuid": "5f946468-d285-54d3-9529-7bfb39866136", 00:14:34.844 "is_configured": true, 00:14:34.844 "data_offset": 0, 00:14:34.844 "data_size": 65536 00:14:34.844 }, 00:14:34.844 { 00:14:34.844 "name": "BaseBdev2", 00:14:34.844 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:34.844 "is_configured": true, 00:14:34.844 "data_offset": 0, 00:14:34.844 
"data_size": 65536 00:14:34.844 }, 00:14:34.844 { 00:14:34.844 "name": "BaseBdev3", 00:14:34.844 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:34.844 "is_configured": true, 00:14:34.844 "data_offset": 0, 00:14:34.844 "data_size": 65536 00:14:34.844 } 00:14:34.844 ] 00:14:34.844 }' 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.844 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.242 "name": "raid_bdev1", 00:14:36.242 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:36.242 "strip_size_kb": 64, 00:14:36.242 "state": "online", 00:14:36.242 "raid_level": "raid5f", 00:14:36.242 "superblock": false, 00:14:36.242 "num_base_bdevs": 3, 00:14:36.242 "num_base_bdevs_discovered": 3, 00:14:36.242 "num_base_bdevs_operational": 3, 00:14:36.242 "process": { 00:14:36.242 "type": "rebuild", 00:14:36.242 "target": "spare", 00:14:36.242 "progress": { 00:14:36.242 "blocks": 47104, 00:14:36.242 "percent": 35 00:14:36.242 } 00:14:36.242 }, 00:14:36.242 "base_bdevs_list": [ 00:14:36.242 { 00:14:36.242 "name": "spare", 00:14:36.242 "uuid": "5f946468-d285-54d3-9529-7bfb39866136", 00:14:36.242 "is_configured": true, 00:14:36.242 "data_offset": 0, 00:14:36.242 "data_size": 65536 00:14:36.242 }, 00:14:36.242 { 00:14:36.242 "name": "BaseBdev2", 00:14:36.242 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:36.242 "is_configured": true, 00:14:36.242 "data_offset": 0, 00:14:36.242 "data_size": 65536 00:14:36.242 }, 00:14:36.242 { 00:14:36.242 "name": "BaseBdev3", 00:14:36.242 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:36.242 "is_configured": true, 00:14:36.242 "data_offset": 0, 00:14:36.242 "data_size": 65536 00:14:36.242 } 00:14:36.242 ] 00:14:36.242 }' 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.242 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:14:37.181 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.181 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.181 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.181 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.181 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.181 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.181 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.182 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.182 02:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.182 02:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.182 02:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.182 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.182 "name": "raid_bdev1", 00:14:37.182 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:37.182 "strip_size_kb": 64, 00:14:37.182 "state": "online", 00:14:37.182 "raid_level": "raid5f", 00:14:37.182 "superblock": false, 00:14:37.182 "num_base_bdevs": 3, 00:14:37.182 "num_base_bdevs_discovered": 3, 00:14:37.182 "num_base_bdevs_operational": 3, 00:14:37.182 "process": { 00:14:37.182 "type": "rebuild", 00:14:37.182 "target": "spare", 00:14:37.182 "progress": { 00:14:37.182 "blocks": 69632, 00:14:37.182 "percent": 53 00:14:37.182 } 00:14:37.182 }, 00:14:37.182 "base_bdevs_list": [ 00:14:37.182 { 00:14:37.182 "name": "spare", 00:14:37.182 "uuid": 
"5f946468-d285-54d3-9529-7bfb39866136", 00:14:37.182 "is_configured": true, 00:14:37.182 "data_offset": 0, 00:14:37.182 "data_size": 65536 00:14:37.182 }, 00:14:37.182 { 00:14:37.182 "name": "BaseBdev2", 00:14:37.182 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:37.182 "is_configured": true, 00:14:37.182 "data_offset": 0, 00:14:37.182 "data_size": 65536 00:14:37.182 }, 00:14:37.182 { 00:14:37.182 "name": "BaseBdev3", 00:14:37.182 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:37.182 "is_configured": true, 00:14:37.182 "data_offset": 0, 00:14:37.182 "data_size": 65536 00:14:37.182 } 00:14:37.182 ] 00:14:37.182 }' 00:14:37.182 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.182 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.182 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.182 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.182 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.564 02:30:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.564 "name": "raid_bdev1", 00:14:38.564 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:38.564 "strip_size_kb": 64, 00:14:38.564 "state": "online", 00:14:38.564 "raid_level": "raid5f", 00:14:38.564 "superblock": false, 00:14:38.564 "num_base_bdevs": 3, 00:14:38.564 "num_base_bdevs_discovered": 3, 00:14:38.564 "num_base_bdevs_operational": 3, 00:14:38.564 "process": { 00:14:38.564 "type": "rebuild", 00:14:38.564 "target": "spare", 00:14:38.564 "progress": { 00:14:38.564 "blocks": 92160, 00:14:38.564 "percent": 70 00:14:38.564 } 00:14:38.564 }, 00:14:38.564 "base_bdevs_list": [ 00:14:38.564 { 00:14:38.564 "name": "spare", 00:14:38.564 "uuid": "5f946468-d285-54d3-9529-7bfb39866136", 00:14:38.564 "is_configured": true, 00:14:38.564 "data_offset": 0, 00:14:38.564 "data_size": 65536 00:14:38.564 }, 00:14:38.564 { 00:14:38.564 "name": "BaseBdev2", 00:14:38.564 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:38.564 "is_configured": true, 00:14:38.564 "data_offset": 0, 00:14:38.564 "data_size": 65536 00:14:38.564 }, 00:14:38.564 { 00:14:38.564 "name": "BaseBdev3", 00:14:38.564 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:38.564 "is_configured": true, 00:14:38.564 "data_offset": 0, 00:14:38.564 "data_size": 65536 00:14:38.564 } 00:14:38.564 ] 00:14:38.564 }' 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.564 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.505 "name": "raid_bdev1", 00:14:39.505 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:39.505 "strip_size_kb": 64, 00:14:39.505 "state": "online", 00:14:39.505 "raid_level": "raid5f", 00:14:39.505 "superblock": false, 00:14:39.505 "num_base_bdevs": 3, 00:14:39.505 "num_base_bdevs_discovered": 3, 00:14:39.505 
"num_base_bdevs_operational": 3, 00:14:39.505 "process": { 00:14:39.505 "type": "rebuild", 00:14:39.505 "target": "spare", 00:14:39.505 "progress": { 00:14:39.505 "blocks": 116736, 00:14:39.505 "percent": 89 00:14:39.505 } 00:14:39.505 }, 00:14:39.505 "base_bdevs_list": [ 00:14:39.505 { 00:14:39.505 "name": "spare", 00:14:39.505 "uuid": "5f946468-d285-54d3-9529-7bfb39866136", 00:14:39.505 "is_configured": true, 00:14:39.505 "data_offset": 0, 00:14:39.505 "data_size": 65536 00:14:39.505 }, 00:14:39.505 { 00:14:39.505 "name": "BaseBdev2", 00:14:39.505 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:39.505 "is_configured": true, 00:14:39.505 "data_offset": 0, 00:14:39.505 "data_size": 65536 00:14:39.505 }, 00:14:39.505 { 00:14:39.505 "name": "BaseBdev3", 00:14:39.505 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:39.505 "is_configured": true, 00:14:39.505 "data_offset": 0, 00:14:39.505 "data_size": 65536 00:14:39.505 } 00:14:39.505 ] 00:14:39.505 }' 00:14:39.505 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.505 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.505 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.505 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.505 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.078 [2024-11-28 02:30:13.649914] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:40.078 [2024-11-28 02:30:13.649985] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:40.078 [2024-11-28 02:30:13.650023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.649 "name": "raid_bdev1", 00:14:40.649 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:40.649 "strip_size_kb": 64, 00:14:40.649 "state": "online", 00:14:40.649 "raid_level": "raid5f", 00:14:40.649 "superblock": false, 00:14:40.649 "num_base_bdevs": 3, 00:14:40.649 "num_base_bdevs_discovered": 3, 00:14:40.649 "num_base_bdevs_operational": 3, 00:14:40.649 "base_bdevs_list": [ 00:14:40.649 { 00:14:40.649 "name": "spare", 00:14:40.649 "uuid": "5f946468-d285-54d3-9529-7bfb39866136", 00:14:40.649 "is_configured": true, 00:14:40.649 "data_offset": 0, 00:14:40.649 "data_size": 65536 00:14:40.649 }, 00:14:40.649 { 00:14:40.649 "name": "BaseBdev2", 00:14:40.649 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:40.649 "is_configured": true, 00:14:40.649 
"data_offset": 0, 00:14:40.649 "data_size": 65536 00:14:40.649 }, 00:14:40.649 { 00:14:40.649 "name": "BaseBdev3", 00:14:40.649 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:40.649 "is_configured": true, 00:14:40.649 "data_offset": 0, 00:14:40.649 "data_size": 65536 00:14:40.649 } 00:14:40.649 ] 00:14:40.649 }' 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.649 02:30:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.649 "name": "raid_bdev1", 00:14:40.649 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:40.649 "strip_size_kb": 64, 00:14:40.649 "state": "online", 00:14:40.649 "raid_level": "raid5f", 00:14:40.649 "superblock": false, 00:14:40.649 "num_base_bdevs": 3, 00:14:40.649 "num_base_bdevs_discovered": 3, 00:14:40.649 "num_base_bdevs_operational": 3, 00:14:40.649 "base_bdevs_list": [ 00:14:40.649 { 00:14:40.649 "name": "spare", 00:14:40.649 "uuid": "5f946468-d285-54d3-9529-7bfb39866136", 00:14:40.649 "is_configured": true, 00:14:40.649 "data_offset": 0, 00:14:40.649 "data_size": 65536 00:14:40.649 }, 00:14:40.649 { 00:14:40.649 "name": "BaseBdev2", 00:14:40.649 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:40.649 "is_configured": true, 00:14:40.649 "data_offset": 0, 00:14:40.649 "data_size": 65536 00:14:40.649 }, 00:14:40.649 { 00:14:40.649 "name": "BaseBdev3", 00:14:40.649 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:40.649 "is_configured": true, 00:14:40.649 "data_offset": 0, 00:14:40.649 "data_size": 65536 00:14:40.649 } 00:14:40.649 ] 00:14:40.649 }' 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.649 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.909 02:30:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.909 "name": "raid_bdev1", 00:14:40.909 "uuid": "dd5a1420-4622-4ca7-97b9-96303160a83d", 00:14:40.909 "strip_size_kb": 64, 00:14:40.909 "state": "online", 00:14:40.909 "raid_level": "raid5f", 00:14:40.909 "superblock": false, 00:14:40.909 "num_base_bdevs": 3, 00:14:40.909 "num_base_bdevs_discovered": 3, 00:14:40.909 "num_base_bdevs_operational": 3, 00:14:40.909 "base_bdevs_list": [ 00:14:40.909 { 00:14:40.909 "name": "spare", 00:14:40.909 "uuid": "5f946468-d285-54d3-9529-7bfb39866136", 00:14:40.909 "is_configured": true, 00:14:40.909 "data_offset": 0, 00:14:40.909 "data_size": 65536 00:14:40.909 }, 00:14:40.909 { 00:14:40.909 
"name": "BaseBdev2", 00:14:40.909 "uuid": "5c8ae5f9-ab6c-5528-a131-a8cccf01574c", 00:14:40.909 "is_configured": true, 00:14:40.909 "data_offset": 0, 00:14:40.909 "data_size": 65536 00:14:40.909 }, 00:14:40.909 { 00:14:40.909 "name": "BaseBdev3", 00:14:40.909 "uuid": "d6315f60-d473-5cad-88c9-9fc5beb013f8", 00:14:40.909 "is_configured": true, 00:14:40.909 "data_offset": 0, 00:14:40.909 "data_size": 65536 00:14:40.909 } 00:14:40.909 ] 00:14:40.909 }' 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.909 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.169 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.170 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.170 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.170 [2024-11-28 02:30:14.821202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.170 [2024-11-28 02:30:14.821229] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.170 [2024-11-28 02:30:14.821310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.170 [2024-11-28 02:30:14.821385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.170 [2024-11-28 02:30:14.821399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:41.170 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.170 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.170 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.170 02:30:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.170 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:41.170 02:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.430 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:41.430 /dev/nbd0 00:14:41.430 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:41.430 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:41.430 02:30:15 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:41.430 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:41.430 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.430 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.430 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:41.430 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:41.430 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.430 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.430 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.690 1+0 records in 00:14:41.690 1+0 records out 00:14:41.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500004 s, 8.2 MB/s 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:41.690 /dev/nbd1 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.690 1+0 records in 00:14:41.690 1+0 records out 00:14:41.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295614 s, 13.9 MB/s 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.690 02:30:15 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.690 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:41.950 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:41.950 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.950 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:41.950 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.950 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:41.950 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.950 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:42.212 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:42.212 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:42.212 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:42.212 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.212 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.212 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:42.212 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:42.212 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:14:42.212 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.212 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81334 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81334 ']' 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81334 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81334 00:14:42.472 killing process with pid 81334 00:14:42.472 Received shutdown signal, test time was about 60.000000 seconds 00:14:42.472 00:14:42.472 Latency(us) 00:14:42.472 
[2024-11-28T02:30:16.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.472 [2024-11-28T02:30:16.151Z] =================================================================================================================== 00:14:42.472 [2024-11-28T02:30:16.151Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81334' 00:14:42.472 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81334 00:14:42.472 [2024-11-28 02:30:15.981990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.473 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81334 00:14:42.732 [2024-11-28 02:30:16.348435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.114 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:44.114 00:14:44.114 real 0m15.010s 00:14:44.114 user 0m18.482s 00:14:44.114 sys 0m1.937s 00:14:44.114 02:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.114 02:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.114 ************************************ 00:14:44.114 END TEST raid5f_rebuild_test 00:14:44.114 ************************************ 00:14:44.114 02:30:17 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:44.114 02:30:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:44.114 02:30:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.114 02:30:17 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.114 ************************************ 00:14:44.114 START TEST raid5f_rebuild_test_sb 00:14:44.114 ************************************ 00:14:44.114 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:14:44.114 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:44.114 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81774 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81774 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81774 ']' 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.115 02:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.115 [2024-11-28 02:30:17.543983] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:44.115 [2024-11-28 02:30:17.544151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:44.115 Zero copy mechanism will not be used. 
00:14:44.115 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81774 ] 00:14:44.115 [2024-11-28 02:30:17.715864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.375 [2024-11-28 02:30:17.818301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.375 [2024-11-28 02:30:18.014384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.375 [2024-11-28 02:30:18.014428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.945 BaseBdev1_malloc 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.945 [2024-11-28 02:30:18.400332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:44.945 [2024-11-28 02:30:18.400444] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:14:44.945 [2024-11-28 02:30:18.400486] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:44.945 [2024-11-28 02:30:18.400497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.945 [2024-11-28 02:30:18.402507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.945 [2024-11-28 02:30:18.402547] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.945 BaseBdev1 00:14:44.945 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.946 BaseBdev2_malloc 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.946 [2024-11-28 02:30:18.449579] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:44.946 [2024-11-28 02:30:18.449650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.946 [2024-11-28 02:30:18.449671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:44.946 
[2024-11-28 02:30:18.449681] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.946 [2024-11-28 02:30:18.451637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.946 [2024-11-28 02:30:18.451723] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.946 BaseBdev2 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.946 BaseBdev3_malloc 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.946 [2024-11-28 02:30:18.538659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:44.946 [2024-11-28 02:30:18.538708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.946 [2024-11-28 02:30:18.538743] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:44.946 [2024-11-28 02:30:18.538753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.946 [2024-11-28 02:30:18.540703] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.946 [2024-11-28 02:30:18.540816] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:44.946 BaseBdev3 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.946 spare_malloc 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.946 spare_delay 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.946 [2024-11-28 02:30:18.604464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:44.946 [2024-11-28 02:30:18.604514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.946 [2024-11-28 02:30:18.604531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:14:44.946 [2024-11-28 02:30:18.604540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.946 [2024-11-28 02:30:18.606657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.946 [2024-11-28 02:30:18.606731] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.946 spare 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.946 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.946 [2024-11-28 02:30:18.616496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.946 [2024-11-28 02:30:18.618226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.946 [2024-11-28 02:30:18.618283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.946 [2024-11-28 02:30:18.618453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:44.946 [2024-11-28 02:30:18.618465] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:44.946 [2024-11-28 02:30:18.618706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:45.206 [2024-11-28 02:30:18.624211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:45.206 [2024-11-28 02:30:18.624235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:45.206 [2024-11-28 02:30:18.624398] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.206 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.206 "name": "raid_bdev1", 00:14:45.206 
"uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:45.206 "strip_size_kb": 64, 00:14:45.206 "state": "online", 00:14:45.206 "raid_level": "raid5f", 00:14:45.206 "superblock": true, 00:14:45.206 "num_base_bdevs": 3, 00:14:45.206 "num_base_bdevs_discovered": 3, 00:14:45.206 "num_base_bdevs_operational": 3, 00:14:45.206 "base_bdevs_list": [ 00:14:45.207 { 00:14:45.207 "name": "BaseBdev1", 00:14:45.207 "uuid": "98cb7c8c-c06d-533d-9b5b-2f310c55caf7", 00:14:45.207 "is_configured": true, 00:14:45.207 "data_offset": 2048, 00:14:45.207 "data_size": 63488 00:14:45.207 }, 00:14:45.207 { 00:14:45.207 "name": "BaseBdev2", 00:14:45.207 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:45.207 "is_configured": true, 00:14:45.207 "data_offset": 2048, 00:14:45.207 "data_size": 63488 00:14:45.207 }, 00:14:45.207 { 00:14:45.207 "name": "BaseBdev3", 00:14:45.207 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:45.207 "is_configured": true, 00:14:45.207 "data_offset": 2048, 00:14:45.207 "data_size": 63488 00:14:45.207 } 00:14:45.207 ] 00:14:45.207 }' 00:14:45.207 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.207 02:30:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.467 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.467 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:45.468 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.468 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.468 [2024-11-28 02:30:19.065906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.468 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.468 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 
-- # raid_bdev_size=126976 00:14:45.468 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.468 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:45.468 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.468 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.468 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.727 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.728 02:30:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:45.728 [2024-11-28 02:30:19.333327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:45.728 /dev/nbd0 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.728 1+0 records in 00:14:45.728 1+0 records out 00:14:45.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385753 s, 10.6 MB/s 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.728 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:45.728 02:30:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.988 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.988 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:45.988 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.988 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.988 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:45.988 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:45.988 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:45.988 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:46.248 496+0 records in 00:14:46.248 496+0 records out 00:14:46.248 65011712 bytes (65 MB, 62 MiB) copied, 0.351131 s, 185 MB/s 00:14:46.248 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:46.248 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.248 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.248 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.248 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:46.248 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.248 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:46.508 02:30:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.508 [2024-11-28 02:30:19.967020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.508 [2024-11-28 02:30:19.982443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.508 02:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.508 02:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.508 02:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.508 "name": "raid_bdev1", 00:14:46.508 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:46.508 "strip_size_kb": 64, 00:14:46.508 "state": "online", 00:14:46.508 "raid_level": "raid5f", 00:14:46.508 "superblock": true, 00:14:46.508 "num_base_bdevs": 3, 00:14:46.508 "num_base_bdevs_discovered": 2, 00:14:46.508 "num_base_bdevs_operational": 2, 00:14:46.508 "base_bdevs_list": [ 00:14:46.508 { 00:14:46.508 "name": null, 00:14:46.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.508 "is_configured": false, 00:14:46.508 "data_offset": 0, 00:14:46.508 "data_size": 63488 00:14:46.508 }, 00:14:46.508 { 00:14:46.508 "name": "BaseBdev2", 00:14:46.508 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:46.508 
"is_configured": true, 00:14:46.508 "data_offset": 2048, 00:14:46.508 "data_size": 63488 00:14:46.508 }, 00:14:46.508 { 00:14:46.508 "name": "BaseBdev3", 00:14:46.508 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:46.508 "is_configured": true, 00:14:46.508 "data_offset": 2048, 00:14:46.508 "data_size": 63488 00:14:46.508 } 00:14:46.508 ] 00:14:46.508 }' 00:14:46.508 02:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.508 02:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.077 02:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.077 02:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.077 02:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.077 [2024-11-28 02:30:20.453611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.077 [2024-11-28 02:30:20.469363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:14:47.077 02:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.077 02:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:47.077 [2024-11-28 02:30:20.476385] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.018 02:30:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.018 "name": "raid_bdev1", 00:14:48.018 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:48.018 "strip_size_kb": 64, 00:14:48.018 "state": "online", 00:14:48.018 "raid_level": "raid5f", 00:14:48.018 "superblock": true, 00:14:48.018 "num_base_bdevs": 3, 00:14:48.018 "num_base_bdevs_discovered": 3, 00:14:48.018 "num_base_bdevs_operational": 3, 00:14:48.018 "process": { 00:14:48.018 "type": "rebuild", 00:14:48.018 "target": "spare", 00:14:48.018 "progress": { 00:14:48.018 "blocks": 20480, 00:14:48.018 "percent": 16 00:14:48.018 } 00:14:48.018 }, 00:14:48.018 "base_bdevs_list": [ 00:14:48.018 { 00:14:48.018 "name": "spare", 00:14:48.018 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:48.018 "is_configured": true, 00:14:48.018 "data_offset": 2048, 00:14:48.018 "data_size": 63488 00:14:48.018 }, 00:14:48.018 { 00:14:48.018 "name": "BaseBdev2", 00:14:48.018 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:48.018 "is_configured": true, 00:14:48.018 "data_offset": 2048, 00:14:48.018 "data_size": 63488 00:14:48.018 }, 00:14:48.018 { 00:14:48.018 "name": "BaseBdev3", 00:14:48.018 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:48.018 "is_configured": true, 00:14:48.018 "data_offset": 2048, 00:14:48.018 "data_size": 
63488 00:14:48.018 } 00:14:48.018 ] 00:14:48.018 }' 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.018 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.018 [2024-11-28 02:30:21.619781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.018 [2024-11-28 02:30:21.683494] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.018 [2024-11-28 02:30:21.683544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.018 [2024-11-28 02:30:21.683559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.018 [2024-11-28 02:30:21.683566] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.279 02:30:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.279 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.279 "name": "raid_bdev1", 00:14:48.279 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:48.279 "strip_size_kb": 64, 00:14:48.279 "state": "online", 00:14:48.279 "raid_level": "raid5f", 00:14:48.279 "superblock": true, 00:14:48.279 "num_base_bdevs": 3, 00:14:48.279 "num_base_bdevs_discovered": 2, 00:14:48.279 "num_base_bdevs_operational": 2, 00:14:48.279 "base_bdevs_list": [ 00:14:48.279 { 00:14:48.279 "name": null, 00:14:48.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.279 "is_configured": false, 00:14:48.279 "data_offset": 0, 00:14:48.279 "data_size": 63488 
00:14:48.279 }, 00:14:48.279 { 00:14:48.279 "name": "BaseBdev2", 00:14:48.279 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:48.279 "is_configured": true, 00:14:48.279 "data_offset": 2048, 00:14:48.279 "data_size": 63488 00:14:48.280 }, 00:14:48.280 { 00:14:48.280 "name": "BaseBdev3", 00:14:48.280 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:48.280 "is_configured": true, 00:14:48.280 "data_offset": 2048, 00:14:48.280 "data_size": 63488 00:14:48.280 } 00:14:48.280 ] 00:14:48.280 }' 00:14:48.280 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.280 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.540 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.540 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.540 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.540 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.540 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.540 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.540 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.540 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.540 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.540 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.800 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.801 "name": "raid_bdev1", 00:14:48.801 "uuid": 
"6ca34db6-8406-4293-8358-c19687a7402a", 00:14:48.801 "strip_size_kb": 64, 00:14:48.801 "state": "online", 00:14:48.801 "raid_level": "raid5f", 00:14:48.801 "superblock": true, 00:14:48.801 "num_base_bdevs": 3, 00:14:48.801 "num_base_bdevs_discovered": 2, 00:14:48.801 "num_base_bdevs_operational": 2, 00:14:48.801 "base_bdevs_list": [ 00:14:48.801 { 00:14:48.801 "name": null, 00:14:48.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.801 "is_configured": false, 00:14:48.801 "data_offset": 0, 00:14:48.801 "data_size": 63488 00:14:48.801 }, 00:14:48.801 { 00:14:48.801 "name": "BaseBdev2", 00:14:48.801 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:48.801 "is_configured": true, 00:14:48.801 "data_offset": 2048, 00:14:48.801 "data_size": 63488 00:14:48.801 }, 00:14:48.801 { 00:14:48.801 "name": "BaseBdev3", 00:14:48.801 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:48.801 "is_configured": true, 00:14:48.801 "data_offset": 2048, 00:14:48.801 "data_size": 63488 00:14:48.801 } 00:14:48.801 ] 00:14:48.801 }' 00:14:48.801 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.801 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.801 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.801 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.801 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:48.801 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.801 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.801 [2024-11-28 02:30:22.312533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.801 [2024-11-28 02:30:22.327718] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:14:48.801 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.801 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:48.801 [2024-11-28 02:30:22.334856] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.740 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.740 "name": "raid_bdev1", 00:14:49.740 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:49.740 "strip_size_kb": 64, 00:14:49.740 "state": "online", 00:14:49.740 "raid_level": "raid5f", 00:14:49.740 "superblock": true, 00:14:49.740 "num_base_bdevs": 3, 00:14:49.740 "num_base_bdevs_discovered": 3, 00:14:49.740 
"num_base_bdevs_operational": 3, 00:14:49.740 "process": { 00:14:49.740 "type": "rebuild", 00:14:49.740 "target": "spare", 00:14:49.740 "progress": { 00:14:49.740 "blocks": 20480, 00:14:49.740 "percent": 16 00:14:49.740 } 00:14:49.740 }, 00:14:49.740 "base_bdevs_list": [ 00:14:49.740 { 00:14:49.740 "name": "spare", 00:14:49.741 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:49.741 "is_configured": true, 00:14:49.741 "data_offset": 2048, 00:14:49.741 "data_size": 63488 00:14:49.741 }, 00:14:49.741 { 00:14:49.741 "name": "BaseBdev2", 00:14:49.741 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:49.741 "is_configured": true, 00:14:49.741 "data_offset": 2048, 00:14:49.741 "data_size": 63488 00:14:49.741 }, 00:14:49.741 { 00:14:49.741 "name": "BaseBdev3", 00:14:49.741 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:49.741 "is_configured": true, 00:14:49.741 "data_offset": 2048, 00:14:49.741 "data_size": 63488 00:14:49.741 } 00:14:49.741 ] 00:14:49.741 }' 00:14:49.741 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:50.001 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:50.001 
02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=553 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.001 "name": "raid_bdev1", 00:14:50.001 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:50.001 "strip_size_kb": 64, 00:14:50.001 "state": "online", 00:14:50.001 "raid_level": "raid5f", 00:14:50.001 "superblock": true, 00:14:50.001 "num_base_bdevs": 3, 00:14:50.001 "num_base_bdevs_discovered": 3, 00:14:50.001 "num_base_bdevs_operational": 3, 00:14:50.001 "process": { 00:14:50.001 "type": "rebuild", 00:14:50.001 "target": "spare", 00:14:50.001 "progress": { 00:14:50.001 "blocks": 22528, 00:14:50.001 "percent": 17 00:14:50.001 } 00:14:50.001 }, 
00:14:50.001 "base_bdevs_list": [ 00:14:50.001 { 00:14:50.001 "name": "spare", 00:14:50.001 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:50.001 "is_configured": true, 00:14:50.001 "data_offset": 2048, 00:14:50.001 "data_size": 63488 00:14:50.001 }, 00:14:50.001 { 00:14:50.001 "name": "BaseBdev2", 00:14:50.001 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:50.001 "is_configured": true, 00:14:50.001 "data_offset": 2048, 00:14:50.001 "data_size": 63488 00:14:50.001 }, 00:14:50.001 { 00:14:50.001 "name": "BaseBdev3", 00:14:50.001 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:50.001 "is_configured": true, 00:14:50.001 "data_offset": 2048, 00:14:50.001 "data_size": 63488 00:14:50.001 } 00:14:50.001 ] 00:14:50.001 }' 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.001 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.002 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.002 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.941 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.941 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.941 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.941 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.941 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.941 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.941 
02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.941 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.941 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.941 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.201 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.201 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.201 "name": "raid_bdev1", 00:14:51.201 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:51.201 "strip_size_kb": 64, 00:14:51.201 "state": "online", 00:14:51.201 "raid_level": "raid5f", 00:14:51.201 "superblock": true, 00:14:51.201 "num_base_bdevs": 3, 00:14:51.201 "num_base_bdevs_discovered": 3, 00:14:51.201 "num_base_bdevs_operational": 3, 00:14:51.201 "process": { 00:14:51.201 "type": "rebuild", 00:14:51.201 "target": "spare", 00:14:51.201 "progress": { 00:14:51.201 "blocks": 45056, 00:14:51.201 "percent": 35 00:14:51.201 } 00:14:51.201 }, 00:14:51.201 "base_bdevs_list": [ 00:14:51.201 { 00:14:51.201 "name": "spare", 00:14:51.201 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:51.201 "is_configured": true, 00:14:51.201 "data_offset": 2048, 00:14:51.201 "data_size": 63488 00:14:51.201 }, 00:14:51.201 { 00:14:51.201 "name": "BaseBdev2", 00:14:51.201 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:51.201 "is_configured": true, 00:14:51.201 "data_offset": 2048, 00:14:51.201 "data_size": 63488 00:14:51.201 }, 00:14:51.201 { 00:14:51.201 "name": "BaseBdev3", 00:14:51.201 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:51.201 "is_configured": true, 00:14:51.201 "data_offset": 2048, 00:14:51.201 "data_size": 63488 00:14:51.201 } 00:14:51.201 ] 00:14:51.201 }' 00:14:51.201 02:30:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.201 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.201 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.201 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.201 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.140 "name": "raid_bdev1", 00:14:52.140 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:52.140 
"strip_size_kb": 64, 00:14:52.140 "state": "online", 00:14:52.140 "raid_level": "raid5f", 00:14:52.140 "superblock": true, 00:14:52.140 "num_base_bdevs": 3, 00:14:52.140 "num_base_bdevs_discovered": 3, 00:14:52.140 "num_base_bdevs_operational": 3, 00:14:52.140 "process": { 00:14:52.140 "type": "rebuild", 00:14:52.140 "target": "spare", 00:14:52.140 "progress": { 00:14:52.140 "blocks": 69632, 00:14:52.140 "percent": 54 00:14:52.140 } 00:14:52.140 }, 00:14:52.140 "base_bdevs_list": [ 00:14:52.140 { 00:14:52.140 "name": "spare", 00:14:52.140 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:52.140 "is_configured": true, 00:14:52.140 "data_offset": 2048, 00:14:52.140 "data_size": 63488 00:14:52.140 }, 00:14:52.140 { 00:14:52.140 "name": "BaseBdev2", 00:14:52.140 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:52.140 "is_configured": true, 00:14:52.140 "data_offset": 2048, 00:14:52.140 "data_size": 63488 00:14:52.140 }, 00:14:52.140 { 00:14:52.140 "name": "BaseBdev3", 00:14:52.140 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:52.140 "is_configured": true, 00:14:52.140 "data_offset": 2048, 00:14:52.140 "data_size": 63488 00:14:52.140 } 00:14:52.140 ] 00:14:52.140 }' 00:14:52.140 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.399 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.399 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.399 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.399 02:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.338 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.338 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:14:53.338 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.338 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.338 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.338 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.338 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.339 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.339 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.339 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.339 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.339 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.339 "name": "raid_bdev1", 00:14:53.339 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:53.339 "strip_size_kb": 64, 00:14:53.339 "state": "online", 00:14:53.339 "raid_level": "raid5f", 00:14:53.339 "superblock": true, 00:14:53.339 "num_base_bdevs": 3, 00:14:53.339 "num_base_bdevs_discovered": 3, 00:14:53.339 "num_base_bdevs_operational": 3, 00:14:53.339 "process": { 00:14:53.339 "type": "rebuild", 00:14:53.339 "target": "spare", 00:14:53.339 "progress": { 00:14:53.339 "blocks": 92160, 00:14:53.339 "percent": 72 00:14:53.339 } 00:14:53.339 }, 00:14:53.339 "base_bdevs_list": [ 00:14:53.339 { 00:14:53.339 "name": "spare", 00:14:53.339 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:53.339 "is_configured": true, 00:14:53.339 "data_offset": 2048, 00:14:53.339 "data_size": 63488 00:14:53.339 }, 00:14:53.339 { 00:14:53.339 "name": "BaseBdev2", 00:14:53.339 "uuid": 
"546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:53.339 "is_configured": true, 00:14:53.339 "data_offset": 2048, 00:14:53.339 "data_size": 63488 00:14:53.339 }, 00:14:53.339 { 00:14:53.339 "name": "BaseBdev3", 00:14:53.339 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:53.339 "is_configured": true, 00:14:53.339 "data_offset": 2048, 00:14:53.339 "data_size": 63488 00:14:53.339 } 00:14:53.339 ] 00:14:53.339 }' 00:14:53.339 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.339 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.339 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.599 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.599 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.540 
02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.540 "name": "raid_bdev1", 00:14:54.540 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:54.540 "strip_size_kb": 64, 00:14:54.540 "state": "online", 00:14:54.540 "raid_level": "raid5f", 00:14:54.540 "superblock": true, 00:14:54.540 "num_base_bdevs": 3, 00:14:54.540 "num_base_bdevs_discovered": 3, 00:14:54.540 "num_base_bdevs_operational": 3, 00:14:54.540 "process": { 00:14:54.540 "type": "rebuild", 00:14:54.540 "target": "spare", 00:14:54.540 "progress": { 00:14:54.540 "blocks": 114688, 00:14:54.540 "percent": 90 00:14:54.540 } 00:14:54.540 }, 00:14:54.540 "base_bdevs_list": [ 00:14:54.540 { 00:14:54.540 "name": "spare", 00:14:54.540 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:54.540 "is_configured": true, 00:14:54.540 "data_offset": 2048, 00:14:54.540 "data_size": 63488 00:14:54.540 }, 00:14:54.540 { 00:14:54.540 "name": "BaseBdev2", 00:14:54.540 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:54.540 "is_configured": true, 00:14:54.540 "data_offset": 2048, 00:14:54.540 "data_size": 63488 00:14:54.540 }, 00:14:54.540 { 00:14:54.540 "name": "BaseBdev3", 00:14:54.540 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:54.540 "is_configured": true, 00:14:54.540 "data_offset": 2048, 00:14:54.540 "data_size": 63488 00:14:54.540 } 00:14:54.540 ] 00:14:54.540 }' 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.540 
02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.540 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.110 [2024-11-28 02:30:28.569270] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:55.110 [2024-11-28 02:30:28.569337] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:55.110 [2024-11-28 02:30:28.569420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.748 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.748 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.749 "name": "raid_bdev1", 00:14:55.749 "uuid": 
"6ca34db6-8406-4293-8358-c19687a7402a", 00:14:55.749 "strip_size_kb": 64, 00:14:55.749 "state": "online", 00:14:55.749 "raid_level": "raid5f", 00:14:55.749 "superblock": true, 00:14:55.749 "num_base_bdevs": 3, 00:14:55.749 "num_base_bdevs_discovered": 3, 00:14:55.749 "num_base_bdevs_operational": 3, 00:14:55.749 "base_bdevs_list": [ 00:14:55.749 { 00:14:55.749 "name": "spare", 00:14:55.749 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:55.749 "is_configured": true, 00:14:55.749 "data_offset": 2048, 00:14:55.749 "data_size": 63488 00:14:55.749 }, 00:14:55.749 { 00:14:55.749 "name": "BaseBdev2", 00:14:55.749 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:55.749 "is_configured": true, 00:14:55.749 "data_offset": 2048, 00:14:55.749 "data_size": 63488 00:14:55.749 }, 00:14:55.749 { 00:14:55.749 "name": "BaseBdev3", 00:14:55.749 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:55.749 "is_configured": true, 00:14:55.749 "data_offset": 2048, 00:14:55.749 "data_size": 63488 00:14:55.749 } 00:14:55.749 ] 00:14:55.749 }' 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.749 02:30:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.749 "name": "raid_bdev1", 00:14:55.749 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:55.749 "strip_size_kb": 64, 00:14:55.749 "state": "online", 00:14:55.749 "raid_level": "raid5f", 00:14:55.749 "superblock": true, 00:14:55.749 "num_base_bdevs": 3, 00:14:55.749 "num_base_bdevs_discovered": 3, 00:14:55.749 "num_base_bdevs_operational": 3, 00:14:55.749 "base_bdevs_list": [ 00:14:55.749 { 00:14:55.749 "name": "spare", 00:14:55.749 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:55.749 "is_configured": true, 00:14:55.749 "data_offset": 2048, 00:14:55.749 "data_size": 63488 00:14:55.749 }, 00:14:55.749 { 00:14:55.749 "name": "BaseBdev2", 00:14:55.749 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:55.749 "is_configured": true, 00:14:55.749 "data_offset": 2048, 00:14:55.749 "data_size": 63488 00:14:55.749 }, 00:14:55.749 { 00:14:55.749 "name": "BaseBdev3", 00:14:55.749 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:55.749 "is_configured": true, 00:14:55.749 "data_offset": 2048, 00:14:55.749 "data_size": 63488 00:14:55.749 } 00:14:55.749 ] 00:14:55.749 }' 00:14:55.749 02:30:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.749 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.020 "name": "raid_bdev1", 00:14:56.020 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:56.020 "strip_size_kb": 64, 00:14:56.020 "state": "online", 00:14:56.020 "raid_level": "raid5f", 00:14:56.020 "superblock": true, 00:14:56.020 "num_base_bdevs": 3, 00:14:56.020 "num_base_bdevs_discovered": 3, 00:14:56.020 "num_base_bdevs_operational": 3, 00:14:56.020 "base_bdevs_list": [ 00:14:56.020 { 00:14:56.020 "name": "spare", 00:14:56.020 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:56.020 "is_configured": true, 00:14:56.020 "data_offset": 2048, 00:14:56.020 "data_size": 63488 00:14:56.020 }, 00:14:56.020 { 00:14:56.020 "name": "BaseBdev2", 00:14:56.020 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:56.020 "is_configured": true, 00:14:56.020 "data_offset": 2048, 00:14:56.020 "data_size": 63488 00:14:56.020 }, 00:14:56.020 { 00:14:56.020 "name": "BaseBdev3", 00:14:56.020 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:56.020 "is_configured": true, 00:14:56.020 "data_offset": 2048, 00:14:56.020 "data_size": 63488 00:14:56.020 } 00:14:56.020 ] 00:14:56.020 }' 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.020 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.280 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:56.280 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.280 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.280 [2024-11-28 02:30:29.920298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.280 [2024-11-28 
02:30:29.920331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.280 [2024-11-28 02:30:29.920415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.280 [2024-11-28 02:30:29.920502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.280 [2024-11-28 02:30:29.920521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:56.280 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.280 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.280 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.280 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.280 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:56.280 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.540 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:56.540 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:56.540 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:56.540 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:56.540 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.540 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:56.540 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:56.540 02:30:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:56.540 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:56.540 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:56.540 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:56.541 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:56.541 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:56.541 /dev/nbd0 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.541 1+0 records in 00:14:56.541 1+0 
records out 00:14:56.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254935 s, 16.1 MB/s 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:56.541 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:56.801 /dev/nbd1 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:56.801 02:30:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.801 1+0 records in 00:14:56.801 1+0 records out 00:14:56.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330724 s, 12.4 MB/s 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:56.801 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:57.061 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:57.061 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.061 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:57.061 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:57.061 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:14:57.061 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.061 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:57.321 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:57.321 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:57.321 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:57.321 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.321 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.321 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:57.321 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:57.321 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.321 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.321 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.581 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.581 [2024-11-28 02:30:31.053476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:57.582 [2024-11-28 02:30:31.053534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.582 [2024-11-28 02:30:31.053569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:57.582 [2024-11-28 02:30:31.053580] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.582 [2024-11-28 02:30:31.055834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.582 [2024-11-28 02:30:31.055874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:57.582 [2024-11-28 02:30:31.055977] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:57.582 [2024-11-28 02:30:31.056031] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.582 [2024-11-28 02:30:31.056186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.582 [2024-11-28 02:30:31.056288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.582 spare 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.582 [2024-11-28 02:30:31.156193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:57.582 [2024-11-28 02:30:31.156222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:57.582 [2024-11-28 02:30:31.156482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:14:57.582 [2024-11-28 02:30:31.161819] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:57.582 [2024-11-28 02:30:31.161842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:57.582 [2024-11-28 02:30:31.162025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.582 "name": "raid_bdev1", 00:14:57.582 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:57.582 "strip_size_kb": 64, 00:14:57.582 "state": "online", 00:14:57.582 "raid_level": "raid5f", 00:14:57.582 "superblock": true, 00:14:57.582 "num_base_bdevs": 3, 00:14:57.582 "num_base_bdevs_discovered": 3, 00:14:57.582 "num_base_bdevs_operational": 3, 00:14:57.582 "base_bdevs_list": [ 00:14:57.582 { 00:14:57.582 "name": "spare", 00:14:57.582 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:57.582 "is_configured": true, 00:14:57.582 
"data_offset": 2048, 00:14:57.582 "data_size": 63488 00:14:57.582 }, 00:14:57.582 { 00:14:57.582 "name": "BaseBdev2", 00:14:57.582 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:57.582 "is_configured": true, 00:14:57.582 "data_offset": 2048, 00:14:57.582 "data_size": 63488 00:14:57.582 }, 00:14:57.582 { 00:14:57.582 "name": "BaseBdev3", 00:14:57.582 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:57.582 "is_configured": true, 00:14:57.582 "data_offset": 2048, 00:14:57.582 "data_size": 63488 00:14:57.582 } 00:14:57.582 ] 00:14:57.582 }' 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.582 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.152 
"name": "raid_bdev1", 00:14:58.152 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:58.152 "strip_size_kb": 64, 00:14:58.152 "state": "online", 00:14:58.152 "raid_level": "raid5f", 00:14:58.152 "superblock": true, 00:14:58.152 "num_base_bdevs": 3, 00:14:58.152 "num_base_bdevs_discovered": 3, 00:14:58.152 "num_base_bdevs_operational": 3, 00:14:58.152 "base_bdevs_list": [ 00:14:58.152 { 00:14:58.152 "name": "spare", 00:14:58.152 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:14:58.152 "is_configured": true, 00:14:58.152 "data_offset": 2048, 00:14:58.152 "data_size": 63488 00:14:58.152 }, 00:14:58.152 { 00:14:58.152 "name": "BaseBdev2", 00:14:58.152 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:58.152 "is_configured": true, 00:14:58.152 "data_offset": 2048, 00:14:58.152 "data_size": 63488 00:14:58.152 }, 00:14:58.152 { 00:14:58.152 "name": "BaseBdev3", 00:14:58.152 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:58.152 "is_configured": true, 00:14:58.152 "data_offset": 2048, 00:14:58.152 "data_size": 63488 00:14:58.152 } 00:14:58.152 ] 00:14:58.152 }' 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.152 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:58.152 
02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.412 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.412 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:58.412 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.412 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.412 [2024-11-28 02:30:31.846782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.412 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.412 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:58.412 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.412 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.412 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.412 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.413 "name": "raid_bdev1", 00:14:58.413 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:14:58.413 "strip_size_kb": 64, 00:14:58.413 "state": "online", 00:14:58.413 "raid_level": "raid5f", 00:14:58.413 "superblock": true, 00:14:58.413 "num_base_bdevs": 3, 00:14:58.413 "num_base_bdevs_discovered": 2, 00:14:58.413 "num_base_bdevs_operational": 2, 00:14:58.413 "base_bdevs_list": [ 00:14:58.413 { 00:14:58.413 "name": null, 00:14:58.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.413 "is_configured": false, 00:14:58.413 "data_offset": 0, 00:14:58.413 "data_size": 63488 00:14:58.413 }, 00:14:58.413 { 00:14:58.413 "name": "BaseBdev2", 00:14:58.413 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:14:58.413 "is_configured": true, 00:14:58.413 "data_offset": 2048, 00:14:58.413 "data_size": 63488 00:14:58.413 }, 00:14:58.413 { 00:14:58.413 "name": "BaseBdev3", 00:14:58.413 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:14:58.413 "is_configured": true, 00:14:58.413 "data_offset": 2048, 00:14:58.413 "data_size": 63488 00:14:58.413 } 00:14:58.413 ] 00:14:58.413 }' 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.413 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.673 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:58.673 02:30:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.673 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.673 [2024-11-28 02:30:32.294063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:58.673 [2024-11-28 02:30:32.294253] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:58.673 [2024-11-28 02:30:32.294274] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:58.673 [2024-11-28 02:30:32.294319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:58.673 [2024-11-28 02:30:32.310238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:14:58.673 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.673 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:58.673 [2024-11-28 02:30:32.318270] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.055 "name": "raid_bdev1", 00:15:00.055 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:15:00.055 "strip_size_kb": 64, 00:15:00.055 "state": "online", 00:15:00.055 "raid_level": "raid5f", 00:15:00.055 "superblock": true, 00:15:00.055 "num_base_bdevs": 3, 00:15:00.055 "num_base_bdevs_discovered": 3, 00:15:00.055 "num_base_bdevs_operational": 3, 00:15:00.055 "process": { 00:15:00.055 "type": "rebuild", 00:15:00.055 "target": "spare", 00:15:00.055 "progress": { 00:15:00.055 "blocks": 20480, 00:15:00.055 "percent": 16 00:15:00.055 } 00:15:00.055 }, 00:15:00.055 "base_bdevs_list": [ 00:15:00.055 { 00:15:00.055 "name": "spare", 00:15:00.055 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:15:00.055 "is_configured": true, 00:15:00.055 "data_offset": 2048, 00:15:00.055 "data_size": 63488 00:15:00.055 }, 00:15:00.055 { 00:15:00.055 "name": "BaseBdev2", 00:15:00.055 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:15:00.055 "is_configured": true, 00:15:00.055 "data_offset": 2048, 00:15:00.055 "data_size": 63488 00:15:00.055 }, 00:15:00.055 { 00:15:00.055 "name": "BaseBdev3", 00:15:00.055 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:15:00.055 "is_configured": true, 00:15:00.055 "data_offset": 2048, 00:15:00.055 "data_size": 63488 00:15:00.055 } 00:15:00.055 ] 00:15:00.055 }' 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.055 [2024-11-28 02:30:33.477492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.055 [2024-11-28 02:30:33.525545] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:00.055 [2024-11-28 02:30:33.525619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.055 [2024-11-28 02:30:33.525634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.055 [2024-11-28 02:30:33.525643] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.055 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.055 "name": "raid_bdev1", 00:15:00.055 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:15:00.055 "strip_size_kb": 64, 00:15:00.055 "state": "online", 00:15:00.055 "raid_level": "raid5f", 00:15:00.056 "superblock": true, 00:15:00.056 "num_base_bdevs": 3, 00:15:00.056 "num_base_bdevs_discovered": 2, 00:15:00.056 "num_base_bdevs_operational": 2, 00:15:00.056 "base_bdevs_list": [ 00:15:00.056 { 00:15:00.056 "name": null, 00:15:00.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.056 "is_configured": false, 00:15:00.056 "data_offset": 0, 00:15:00.056 "data_size": 63488 00:15:00.056 }, 00:15:00.056 { 00:15:00.056 "name": "BaseBdev2", 00:15:00.056 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:15:00.056 "is_configured": true, 00:15:00.056 "data_offset": 2048, 00:15:00.056 "data_size": 63488 00:15:00.056 }, 00:15:00.056 { 00:15:00.056 "name": "BaseBdev3", 00:15:00.056 "uuid": 
"6d78955f-734b-574a-8f8f-7c7d586487f5", 00:15:00.056 "is_configured": true, 00:15:00.056 "data_offset": 2048, 00:15:00.056 "data_size": 63488 00:15:00.056 } 00:15:00.056 ] 00:15:00.056 }' 00:15:00.056 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.056 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.625 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:00.625 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.625 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.625 [2024-11-28 02:30:34.034327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:00.625 [2024-11-28 02:30:34.034388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.625 [2024-11-28 02:30:34.034408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:00.626 [2024-11-28 02:30:34.034421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.626 [2024-11-28 02:30:34.034899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.626 [2024-11-28 02:30:34.034942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:00.626 [2024-11-28 02:30:34.035040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:00.626 [2024-11-28 02:30:34.035066] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:00.626 [2024-11-28 02:30:34.035077] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:00.626 [2024-11-28 02:30:34.035100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.626 [2024-11-28 02:30:34.050086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:00.626 spare 00:15:00.626 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.626 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:00.626 [2024-11-28 02:30:34.057533] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.565 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.565 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.565 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.565 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.565 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.565 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.565 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.566 "name": "raid_bdev1", 00:15:01.566 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:15:01.566 "strip_size_kb": 64, 00:15:01.566 "state": 
"online", 00:15:01.566 "raid_level": "raid5f", 00:15:01.566 "superblock": true, 00:15:01.566 "num_base_bdevs": 3, 00:15:01.566 "num_base_bdevs_discovered": 3, 00:15:01.566 "num_base_bdevs_operational": 3, 00:15:01.566 "process": { 00:15:01.566 "type": "rebuild", 00:15:01.566 "target": "spare", 00:15:01.566 "progress": { 00:15:01.566 "blocks": 20480, 00:15:01.566 "percent": 16 00:15:01.566 } 00:15:01.566 }, 00:15:01.566 "base_bdevs_list": [ 00:15:01.566 { 00:15:01.566 "name": "spare", 00:15:01.566 "uuid": "187eeeb5-a651-5300-b0c5-388ab501dd33", 00:15:01.566 "is_configured": true, 00:15:01.566 "data_offset": 2048, 00:15:01.566 "data_size": 63488 00:15:01.566 }, 00:15:01.566 { 00:15:01.566 "name": "BaseBdev2", 00:15:01.566 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:15:01.566 "is_configured": true, 00:15:01.566 "data_offset": 2048, 00:15:01.566 "data_size": 63488 00:15:01.566 }, 00:15:01.566 { 00:15:01.566 "name": "BaseBdev3", 00:15:01.566 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:15:01.566 "is_configured": true, 00:15:01.566 "data_offset": 2048, 00:15:01.566 "data_size": 63488 00:15:01.566 } 00:15:01.566 ] 00:15:01.566 }' 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.566 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.566 [2024-11-28 02:30:35.208866] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.825 [2024-11-28 02:30:35.264954] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:01.825 [2024-11-28 02:30:35.265065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.825 [2024-11-28 02:30:35.265085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.825 [2024-11-28 02:30:35.265092] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.825 02:30:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.825 "name": "raid_bdev1", 00:15:01.825 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:15:01.825 "strip_size_kb": 64, 00:15:01.825 "state": "online", 00:15:01.825 "raid_level": "raid5f", 00:15:01.825 "superblock": true, 00:15:01.825 "num_base_bdevs": 3, 00:15:01.825 "num_base_bdevs_discovered": 2, 00:15:01.825 "num_base_bdevs_operational": 2, 00:15:01.825 "base_bdevs_list": [ 00:15:01.825 { 00:15:01.825 "name": null, 00:15:01.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.825 "is_configured": false, 00:15:01.825 "data_offset": 0, 00:15:01.825 "data_size": 63488 00:15:01.825 }, 00:15:01.825 { 00:15:01.825 "name": "BaseBdev2", 00:15:01.825 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:15:01.825 "is_configured": true, 00:15:01.825 "data_offset": 2048, 00:15:01.825 "data_size": 63488 00:15:01.825 }, 00:15:01.825 { 00:15:01.825 "name": "BaseBdev3", 00:15:01.825 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:15:01.825 "is_configured": true, 00:15:01.825 "data_offset": 2048, 00:15:01.825 "data_size": 63488 00:15:01.825 } 00:15:01.825 ] 00:15:01.825 }' 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.825 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.084 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.084 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.084 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.084 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.084 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.084 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.084 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.084 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.344 "name": "raid_bdev1", 00:15:02.344 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:15:02.344 "strip_size_kb": 64, 00:15:02.344 "state": "online", 00:15:02.344 "raid_level": "raid5f", 00:15:02.344 "superblock": true, 00:15:02.344 "num_base_bdevs": 3, 00:15:02.344 "num_base_bdevs_discovered": 2, 00:15:02.344 "num_base_bdevs_operational": 2, 00:15:02.344 "base_bdevs_list": [ 00:15:02.344 { 00:15:02.344 "name": null, 00:15:02.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.344 "is_configured": false, 00:15:02.344 "data_offset": 0, 00:15:02.344 "data_size": 63488 00:15:02.344 }, 00:15:02.344 { 00:15:02.344 "name": "BaseBdev2", 00:15:02.344 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:15:02.344 "is_configured": true, 00:15:02.344 "data_offset": 2048, 00:15:02.344 "data_size": 63488 00:15:02.344 }, 00:15:02.344 { 00:15:02.344 "name": "BaseBdev3", 00:15:02.344 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:15:02.344 
"is_configured": true, 00:15:02.344 "data_offset": 2048, 00:15:02.344 "data_size": 63488 00:15:02.344 } 00:15:02.344 ] 00:15:02.344 }' 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.344 [2024-11-28 02:30:35.905097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:02.344 [2024-11-28 02:30:35.905152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.344 [2024-11-28 02:30:35.905175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:02.344 [2024-11-28 02:30:35.905194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.344 [2024-11-28 02:30:35.905627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.344 
[2024-11-28 02:30:35.905644] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:02.344 [2024-11-28 02:30:35.905715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:02.344 [2024-11-28 02:30:35.905729] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:02.344 [2024-11-28 02:30:35.905750] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:02.344 [2024-11-28 02:30:35.905761] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:02.344 BaseBdev1 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.344 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.284 02:30:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.284 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.544 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.544 "name": "raid_bdev1", 00:15:03.544 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:15:03.544 "strip_size_kb": 64, 00:15:03.544 "state": "online", 00:15:03.544 "raid_level": "raid5f", 00:15:03.544 "superblock": true, 00:15:03.544 "num_base_bdevs": 3, 00:15:03.544 "num_base_bdevs_discovered": 2, 00:15:03.544 "num_base_bdevs_operational": 2, 00:15:03.544 "base_bdevs_list": [ 00:15:03.544 { 00:15:03.544 "name": null, 00:15:03.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.544 "is_configured": false, 00:15:03.544 "data_offset": 0, 00:15:03.544 "data_size": 63488 00:15:03.544 }, 00:15:03.544 { 00:15:03.544 "name": "BaseBdev2", 00:15:03.544 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:15:03.544 "is_configured": true, 00:15:03.544 "data_offset": 2048, 00:15:03.544 "data_size": 63488 00:15:03.544 }, 00:15:03.544 { 00:15:03.544 "name": "BaseBdev3", 00:15:03.544 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:15:03.544 "is_configured": true, 00:15:03.544 "data_offset": 2048, 00:15:03.544 "data_size": 63488 00:15:03.544 } 00:15:03.544 ] 00:15:03.544 }' 00:15:03.544 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.544 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.805 "name": "raid_bdev1", 00:15:03.805 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:15:03.805 "strip_size_kb": 64, 00:15:03.805 "state": "online", 00:15:03.805 "raid_level": "raid5f", 00:15:03.805 "superblock": true, 00:15:03.805 "num_base_bdevs": 3, 00:15:03.805 "num_base_bdevs_discovered": 2, 00:15:03.805 "num_base_bdevs_operational": 2, 00:15:03.805 "base_bdevs_list": [ 00:15:03.805 { 00:15:03.805 "name": null, 00:15:03.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.805 "is_configured": false, 00:15:03.805 "data_offset": 0, 00:15:03.805 "data_size": 63488 00:15:03.805 }, 00:15:03.805 { 00:15:03.805 "name": "BaseBdev2", 00:15:03.805 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 
00:15:03.805 "is_configured": true, 00:15:03.805 "data_offset": 2048, 00:15:03.805 "data_size": 63488 00:15:03.805 }, 00:15:03.805 { 00:15:03.805 "name": "BaseBdev3", 00:15:03.805 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:15:03.805 "is_configured": true, 00:15:03.805 "data_offset": 2048, 00:15:03.805 "data_size": 63488 00:15:03.805 } 00:15:03.805 ] 00:15:03.805 }' 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.805 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.066 02:30:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.066 [2024-11-28 02:30:37.514522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.066 [2024-11-28 02:30:37.514721] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:04.066 [2024-11-28 02:30:37.514803] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:04.066 request: 00:15:04.066 { 00:15:04.066 "base_bdev": "BaseBdev1", 00:15:04.066 "raid_bdev": "raid_bdev1", 00:15:04.066 "method": "bdev_raid_add_base_bdev", 00:15:04.066 "req_id": 1 00:15:04.066 } 00:15:04.066 Got JSON-RPC error response 00:15:04.066 response: 00:15:04.066 { 00:15:04.066 "code": -22, 00:15:04.066 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:04.066 } 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:04.066 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.013 "name": "raid_bdev1", 00:15:05.013 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:15:05.013 "strip_size_kb": 64, 00:15:05.013 "state": "online", 00:15:05.013 "raid_level": "raid5f", 00:15:05.013 "superblock": true, 00:15:05.013 "num_base_bdevs": 3, 00:15:05.013 "num_base_bdevs_discovered": 2, 00:15:05.013 "num_base_bdevs_operational": 2, 00:15:05.013 "base_bdevs_list": [ 00:15:05.013 { 00:15:05.013 "name": null, 00:15:05.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.013 "is_configured": false, 00:15:05.013 "data_offset": 0, 00:15:05.013 "data_size": 63488 00:15:05.013 }, 00:15:05.013 { 00:15:05.013 
"name": "BaseBdev2", 00:15:05.013 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:15:05.013 "is_configured": true, 00:15:05.013 "data_offset": 2048, 00:15:05.013 "data_size": 63488 00:15:05.013 }, 00:15:05.013 { 00:15:05.013 "name": "BaseBdev3", 00:15:05.013 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:15:05.013 "is_configured": true, 00:15:05.013 "data_offset": 2048, 00:15:05.013 "data_size": 63488 00:15:05.013 } 00:15:05.013 ] 00:15:05.013 }' 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.013 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.648 "name": "raid_bdev1", 00:15:05.648 "uuid": "6ca34db6-8406-4293-8358-c19687a7402a", 00:15:05.648 
"strip_size_kb": 64, 00:15:05.648 "state": "online", 00:15:05.648 "raid_level": "raid5f", 00:15:05.648 "superblock": true, 00:15:05.648 "num_base_bdevs": 3, 00:15:05.648 "num_base_bdevs_discovered": 2, 00:15:05.648 "num_base_bdevs_operational": 2, 00:15:05.648 "base_bdevs_list": [ 00:15:05.648 { 00:15:05.648 "name": null, 00:15:05.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.648 "is_configured": false, 00:15:05.648 "data_offset": 0, 00:15:05.648 "data_size": 63488 00:15:05.648 }, 00:15:05.648 { 00:15:05.648 "name": "BaseBdev2", 00:15:05.648 "uuid": "546182ed-0ae7-5e1d-afb3-0d1f3164ce8f", 00:15:05.648 "is_configured": true, 00:15:05.648 "data_offset": 2048, 00:15:05.648 "data_size": 63488 00:15:05.648 }, 00:15:05.648 { 00:15:05.648 "name": "BaseBdev3", 00:15:05.648 "uuid": "6d78955f-734b-574a-8f8f-7c7d586487f5", 00:15:05.648 "is_configured": true, 00:15:05.648 "data_offset": 2048, 00:15:05.648 "data_size": 63488 00:15:05.648 } 00:15:05.648 ] 00:15:05.648 }' 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81774 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81774 ']' 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81774 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.648 02:30:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81774 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.648 killing process with pid 81774 00:15:05.648 Received shutdown signal, test time was about 60.000000 seconds 00:15:05.648 00:15:05.648 Latency(us) 00:15:05.648 [2024-11-28T02:30:39.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.648 [2024-11-28T02:30:39.327Z] =================================================================================================================== 00:15:05.648 [2024-11-28T02:30:39.327Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81774' 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81774 00:15:05.648 [2024-11-28 02:30:39.182085] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.648 [2024-11-28 02:30:39.182205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.648 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81774 00:15:05.648 [2024-11-28 02:30:39.182266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.648 [2024-11-28 02:30:39.182278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:05.908 [2024-11-28 02:30:39.548749] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.289 02:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:07.289 00:15:07.289 real 0m23.126s 00:15:07.289 user 0m29.808s 
00:15:07.289 sys 0m2.633s 00:15:07.289 02:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.289 ************************************ 00:15:07.289 END TEST raid5f_rebuild_test_sb 00:15:07.289 ************************************ 00:15:07.289 02:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.289 02:30:40 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:07.289 02:30:40 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:07.289 02:30:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:07.289 02:30:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.289 02:30:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.289 ************************************ 00:15:07.289 START TEST raid5f_state_function_test 00:15:07.289 ************************************ 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
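Earlier in this trace, `NOT rpc_cmd bdev_raid_add_base_bdev ...` expects the rpc to fail (the JSON-RPC error -22 response is the passing outcome) and `autotest_common.sh` then records `es=1`. The real helper does argument validation via `type -t`; the sketch below is only a hypothetical minimal version showing the status-inversion idea, not the actual autotest_common.sh implementation:

```shell
# Minimal sketch of an expected-failure wrapper in the style of the NOT
# helper seen in this trace: run the command, and invert its exit status
# so that a failing command makes the surrounding test pass.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  fi
  return 0     # command failed, which is what the test expected
}

NOT false && echo "expected failure observed"
```

Used this way, a negative test such as adding a base bdev with a stale superblock sequence number can assert on the failure without aborting the script under `set -e`.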
00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:07.289 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82523 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82523' 00:15:07.290 Process raid pid: 82523 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82523 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82523 ']' 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.290 02:30:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.290 [2024-11-28 02:30:40.752788] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:07.290 [2024-11-28 02:30:40.753015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.290 [2024-11-28 02:30:40.929846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.550 [2024-11-28 02:30:41.038752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.550 [2024-11-28 02:30:41.223481] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.550 [2024-11-28 02:30:41.223509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.120 [2024-11-28 02:30:41.567293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:08.120 [2024-11-28 02:30:41.567360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:08.120 [2024-11-28 02:30:41.567370] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.120 [2024-11-28 02:30:41.567379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.120 [2024-11-28 02:30:41.567386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:08.120 [2024-11-28 02:30:41.567395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:08.120 [2024-11-28 02:30:41.567402] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:08.120 [2024-11-28 02:30:41.567410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.120 02:30:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.120 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.120 "name": "Existed_Raid", 00:15:08.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.120 "strip_size_kb": 64, 00:15:08.120 "state": "configuring", 00:15:08.120 "raid_level": "raid5f", 00:15:08.120 "superblock": false, 00:15:08.120 "num_base_bdevs": 4, 00:15:08.120 "num_base_bdevs_discovered": 0, 00:15:08.120 "num_base_bdevs_operational": 4, 00:15:08.120 "base_bdevs_list": [ 00:15:08.120 { 00:15:08.120 "name": "BaseBdev1", 00:15:08.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.120 "is_configured": false, 00:15:08.120 "data_offset": 0, 00:15:08.120 "data_size": 0 00:15:08.120 }, 00:15:08.120 { 00:15:08.120 "name": "BaseBdev2", 00:15:08.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.120 "is_configured": false, 00:15:08.120 "data_offset": 0, 00:15:08.121 "data_size": 0 00:15:08.121 }, 00:15:08.121 { 00:15:08.121 "name": "BaseBdev3", 00:15:08.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.121 "is_configured": false, 00:15:08.121 "data_offset": 0, 00:15:08.121 "data_size": 0 00:15:08.121 }, 00:15:08.121 { 00:15:08.121 "name": "BaseBdev4", 00:15:08.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.121 "is_configured": false, 00:15:08.121 "data_offset": 0, 00:15:08.121 "data_size": 0 00:15:08.121 } 00:15:08.121 ] 00:15:08.121 }' 00:15:08.121 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.121 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.381 [2024-11-28 02:30:41.918599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.381 [2024-11-28 02:30:41.918675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.381 [2024-11-28 02:30:41.930595] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:08.381 [2024-11-28 02:30:41.930671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:08.381 [2024-11-28 02:30:41.930697] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.381 [2024-11-28 02:30:41.930719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.381 [2024-11-28 02:30:41.930736] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:08.381 [2024-11-28 02:30:41.930755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:08.381 [2024-11-28 02:30:41.930772] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:08.381 [2024-11-28 02:30:41.930806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.381 [2024-11-28 02:30:41.972408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.381 BaseBdev1 00:15:08.381 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.382 
02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.382 02:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.382 [ 00:15:08.382 { 00:15:08.382 "name": "BaseBdev1", 00:15:08.382 "aliases": [ 00:15:08.382 "6d998027-dbcf-4339-93e6-71d3417e333c" 00:15:08.382 ], 00:15:08.382 "product_name": "Malloc disk", 00:15:08.382 "block_size": 512, 00:15:08.382 "num_blocks": 65536, 00:15:08.382 "uuid": "6d998027-dbcf-4339-93e6-71d3417e333c", 00:15:08.382 "assigned_rate_limits": { 00:15:08.382 "rw_ios_per_sec": 0, 00:15:08.382 "rw_mbytes_per_sec": 0, 00:15:08.382 "r_mbytes_per_sec": 0, 00:15:08.382 "w_mbytes_per_sec": 0 00:15:08.382 }, 00:15:08.382 "claimed": true, 00:15:08.382 "claim_type": "exclusive_write", 00:15:08.382 "zoned": false, 00:15:08.382 "supported_io_types": { 00:15:08.382 "read": true, 00:15:08.382 "write": true, 00:15:08.382 "unmap": true, 00:15:08.382 "flush": true, 00:15:08.382 "reset": true, 00:15:08.382 "nvme_admin": false, 00:15:08.382 "nvme_io": false, 00:15:08.382 "nvme_io_md": false, 00:15:08.382 "write_zeroes": true, 00:15:08.382 "zcopy": true, 00:15:08.382 "get_zone_info": false, 00:15:08.382 "zone_management": false, 00:15:08.382 "zone_append": false, 00:15:08.382 "compare": false, 00:15:08.382 "compare_and_write": false, 00:15:08.382 "abort": true, 00:15:08.382 "seek_hole": false, 00:15:08.382 "seek_data": false, 00:15:08.382 "copy": true, 00:15:08.382 "nvme_iov_md": false 00:15:08.382 }, 00:15:08.382 "memory_domains": [ 00:15:08.382 { 00:15:08.382 "dma_device_id": "system", 00:15:08.382 "dma_device_type": 1 00:15:08.382 }, 00:15:08.382 { 00:15:08.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.382 "dma_device_type": 2 00:15:08.382 } 00:15:08.382 ], 00:15:08.382 "driver_specific": {} 00:15:08.382 } 
00:15:08.382 ] 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.382 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:08.643 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.643 "name": "Existed_Raid", 00:15:08.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.643 "strip_size_kb": 64, 00:15:08.643 "state": "configuring", 00:15:08.643 "raid_level": "raid5f", 00:15:08.643 "superblock": false, 00:15:08.643 "num_base_bdevs": 4, 00:15:08.643 "num_base_bdevs_discovered": 1, 00:15:08.643 "num_base_bdevs_operational": 4, 00:15:08.643 "base_bdevs_list": [ 00:15:08.643 { 00:15:08.643 "name": "BaseBdev1", 00:15:08.643 "uuid": "6d998027-dbcf-4339-93e6-71d3417e333c", 00:15:08.643 "is_configured": true, 00:15:08.643 "data_offset": 0, 00:15:08.643 "data_size": 65536 00:15:08.643 }, 00:15:08.643 { 00:15:08.643 "name": "BaseBdev2", 00:15:08.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.643 "is_configured": false, 00:15:08.643 "data_offset": 0, 00:15:08.643 "data_size": 0 00:15:08.643 }, 00:15:08.643 { 00:15:08.643 "name": "BaseBdev3", 00:15:08.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.643 "is_configured": false, 00:15:08.643 "data_offset": 0, 00:15:08.643 "data_size": 0 00:15:08.643 }, 00:15:08.643 { 00:15:08.643 "name": "BaseBdev4", 00:15:08.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.643 "is_configured": false, 00:15:08.643 "data_offset": 0, 00:15:08.643 "data_size": 0 00:15:08.643 } 00:15:08.643 ] 00:15:08.643 }' 00:15:08.643 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.643 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.904 
[2024-11-28 02:30:42.467670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.904 [2024-11-28 02:30:42.467761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.904 [2024-11-28 02:30:42.475699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.904 [2024-11-28 02:30:42.477458] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.904 [2024-11-28 02:30:42.477531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.904 [2024-11-28 02:30:42.477559] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:08.904 [2024-11-28 02:30:42.477582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:08.904 [2024-11-28 02:30:42.477600] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:08.904 [2024-11-28 02:30:42.477619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.904 "name": "Existed_Raid", 00:15:08.904 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:08.904 "strip_size_kb": 64, 00:15:08.904 "state": "configuring", 00:15:08.904 "raid_level": "raid5f", 00:15:08.904 "superblock": false, 00:15:08.904 "num_base_bdevs": 4, 00:15:08.904 "num_base_bdevs_discovered": 1, 00:15:08.904 "num_base_bdevs_operational": 4, 00:15:08.904 "base_bdevs_list": [ 00:15:08.904 { 00:15:08.904 "name": "BaseBdev1", 00:15:08.904 "uuid": "6d998027-dbcf-4339-93e6-71d3417e333c", 00:15:08.904 "is_configured": true, 00:15:08.904 "data_offset": 0, 00:15:08.904 "data_size": 65536 00:15:08.904 }, 00:15:08.904 { 00:15:08.904 "name": "BaseBdev2", 00:15:08.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.904 "is_configured": false, 00:15:08.904 "data_offset": 0, 00:15:08.904 "data_size": 0 00:15:08.904 }, 00:15:08.904 { 00:15:08.904 "name": "BaseBdev3", 00:15:08.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.904 "is_configured": false, 00:15:08.904 "data_offset": 0, 00:15:08.904 "data_size": 0 00:15:08.904 }, 00:15:08.904 { 00:15:08.904 "name": "BaseBdev4", 00:15:08.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.904 "is_configured": false, 00:15:08.904 "data_offset": 0, 00:15:08.904 "data_size": 0 00:15:08.904 } 00:15:08.904 ] 00:15:08.904 }' 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.904 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.475 [2024-11-28 02:30:42.960424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.475 BaseBdev2 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.475 02:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.475 [ 00:15:09.476 { 00:15:09.476 "name": "BaseBdev2", 00:15:09.476 "aliases": [ 00:15:09.476 "d72af631-52a3-41b1-96cd-438158822ec6" 00:15:09.476 ], 00:15:09.476 "product_name": "Malloc disk", 00:15:09.476 "block_size": 512, 00:15:09.476 "num_blocks": 65536, 00:15:09.476 "uuid": "d72af631-52a3-41b1-96cd-438158822ec6", 00:15:09.476 "assigned_rate_limits": { 00:15:09.476 "rw_ios_per_sec": 0, 00:15:09.476 "rw_mbytes_per_sec": 0, 00:15:09.476 
"r_mbytes_per_sec": 0, 00:15:09.476 "w_mbytes_per_sec": 0 00:15:09.476 }, 00:15:09.476 "claimed": true, 00:15:09.476 "claim_type": "exclusive_write", 00:15:09.476 "zoned": false, 00:15:09.476 "supported_io_types": { 00:15:09.476 "read": true, 00:15:09.476 "write": true, 00:15:09.476 "unmap": true, 00:15:09.476 "flush": true, 00:15:09.476 "reset": true, 00:15:09.476 "nvme_admin": false, 00:15:09.476 "nvme_io": false, 00:15:09.476 "nvme_io_md": false, 00:15:09.476 "write_zeroes": true, 00:15:09.476 "zcopy": true, 00:15:09.476 "get_zone_info": false, 00:15:09.476 "zone_management": false, 00:15:09.476 "zone_append": false, 00:15:09.476 "compare": false, 00:15:09.476 "compare_and_write": false, 00:15:09.476 "abort": true, 00:15:09.476 "seek_hole": false, 00:15:09.476 "seek_data": false, 00:15:09.476 "copy": true, 00:15:09.476 "nvme_iov_md": false 00:15:09.476 }, 00:15:09.476 "memory_domains": [ 00:15:09.476 { 00:15:09.476 "dma_device_id": "system", 00:15:09.476 "dma_device_type": 1 00:15:09.476 }, 00:15:09.476 { 00:15:09.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.476 "dma_device_type": 2 00:15:09.476 } 00:15:09.476 ], 00:15:09.476 "driver_specific": {} 00:15:09.476 } 00:15:09.476 ] 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.476 "name": "Existed_Raid", 00:15:09.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.476 "strip_size_kb": 64, 00:15:09.476 "state": "configuring", 00:15:09.476 "raid_level": "raid5f", 00:15:09.476 "superblock": false, 00:15:09.476 "num_base_bdevs": 4, 00:15:09.476 "num_base_bdevs_discovered": 2, 00:15:09.476 "num_base_bdevs_operational": 4, 00:15:09.476 "base_bdevs_list": [ 00:15:09.476 { 00:15:09.476 "name": "BaseBdev1", 00:15:09.476 "uuid": 
"6d998027-dbcf-4339-93e6-71d3417e333c", 00:15:09.476 "is_configured": true, 00:15:09.476 "data_offset": 0, 00:15:09.476 "data_size": 65536 00:15:09.476 }, 00:15:09.476 { 00:15:09.476 "name": "BaseBdev2", 00:15:09.476 "uuid": "d72af631-52a3-41b1-96cd-438158822ec6", 00:15:09.476 "is_configured": true, 00:15:09.476 "data_offset": 0, 00:15:09.476 "data_size": 65536 00:15:09.476 }, 00:15:09.476 { 00:15:09.476 "name": "BaseBdev3", 00:15:09.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.476 "is_configured": false, 00:15:09.476 "data_offset": 0, 00:15:09.476 "data_size": 0 00:15:09.476 }, 00:15:09.476 { 00:15:09.476 "name": "BaseBdev4", 00:15:09.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.476 "is_configured": false, 00:15:09.476 "data_offset": 0, 00:15:09.476 "data_size": 0 00:15:09.476 } 00:15:09.476 ] 00:15:09.476 }' 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.476 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.047 [2024-11-28 02:30:43.523160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.047 BaseBdev3 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.047 [ 00:15:10.047 { 00:15:10.047 "name": "BaseBdev3", 00:15:10.047 "aliases": [ 00:15:10.047 "816aa3eb-038f-4e39-b51c-a5f158647e96" 00:15:10.047 ], 00:15:10.047 "product_name": "Malloc disk", 00:15:10.047 "block_size": 512, 00:15:10.047 "num_blocks": 65536, 00:15:10.047 "uuid": "816aa3eb-038f-4e39-b51c-a5f158647e96", 00:15:10.047 "assigned_rate_limits": { 00:15:10.047 "rw_ios_per_sec": 0, 00:15:10.047 "rw_mbytes_per_sec": 0, 00:15:10.047 "r_mbytes_per_sec": 0, 00:15:10.047 "w_mbytes_per_sec": 0 00:15:10.047 }, 00:15:10.047 "claimed": true, 00:15:10.047 "claim_type": "exclusive_write", 00:15:10.047 "zoned": false, 00:15:10.047 "supported_io_types": { 00:15:10.047 "read": true, 00:15:10.047 "write": true, 00:15:10.047 "unmap": true, 00:15:10.047 "flush": true, 00:15:10.047 "reset": true, 00:15:10.047 "nvme_admin": false, 
00:15:10.047 "nvme_io": false, 00:15:10.047 "nvme_io_md": false, 00:15:10.047 "write_zeroes": true, 00:15:10.047 "zcopy": true, 00:15:10.047 "get_zone_info": false, 00:15:10.047 "zone_management": false, 00:15:10.047 "zone_append": false, 00:15:10.047 "compare": false, 00:15:10.047 "compare_and_write": false, 00:15:10.047 "abort": true, 00:15:10.047 "seek_hole": false, 00:15:10.047 "seek_data": false, 00:15:10.047 "copy": true, 00:15:10.047 "nvme_iov_md": false 00:15:10.047 }, 00:15:10.047 "memory_domains": [ 00:15:10.047 { 00:15:10.047 "dma_device_id": "system", 00:15:10.047 "dma_device_type": 1 00:15:10.047 }, 00:15:10.047 { 00:15:10.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.047 "dma_device_type": 2 00:15:10.047 } 00:15:10.047 ], 00:15:10.047 "driver_specific": {} 00:15:10.047 } 00:15:10.047 ] 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.047 "name": "Existed_Raid", 00:15:10.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.047 "strip_size_kb": 64, 00:15:10.047 "state": "configuring", 00:15:10.047 "raid_level": "raid5f", 00:15:10.047 "superblock": false, 00:15:10.047 "num_base_bdevs": 4, 00:15:10.047 "num_base_bdevs_discovered": 3, 00:15:10.047 "num_base_bdevs_operational": 4, 00:15:10.047 "base_bdevs_list": [ 00:15:10.047 { 00:15:10.047 "name": "BaseBdev1", 00:15:10.047 "uuid": "6d998027-dbcf-4339-93e6-71d3417e333c", 00:15:10.047 "is_configured": true, 00:15:10.047 "data_offset": 0, 00:15:10.047 "data_size": 65536 00:15:10.047 }, 00:15:10.047 { 00:15:10.047 "name": "BaseBdev2", 00:15:10.047 "uuid": "d72af631-52a3-41b1-96cd-438158822ec6", 00:15:10.047 "is_configured": true, 00:15:10.047 "data_offset": 0, 00:15:10.047 "data_size": 65536 00:15:10.047 }, 00:15:10.047 { 
00:15:10.047 "name": "BaseBdev3", 00:15:10.047 "uuid": "816aa3eb-038f-4e39-b51c-a5f158647e96", 00:15:10.047 "is_configured": true, 00:15:10.047 "data_offset": 0, 00:15:10.047 "data_size": 65536 00:15:10.047 }, 00:15:10.047 { 00:15:10.047 "name": "BaseBdev4", 00:15:10.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.047 "is_configured": false, 00:15:10.047 "data_offset": 0, 00:15:10.047 "data_size": 0 00:15:10.047 } 00:15:10.047 ] 00:15:10.047 }' 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.047 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.308 02:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:10.308 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.308 02:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.568 [2024-11-28 02:30:44.003635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:10.568 [2024-11-28 02:30:44.003760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:10.568 [2024-11-28 02:30:44.003786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:10.568 [2024-11-28 02:30:44.004122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:10.568 [2024-11-28 02:30:44.010726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:10.568 [2024-11-28 02:30:44.010786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:10.568 [2024-11-28 02:30:44.011108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.568 BaseBdev4 00:15:10.568 02:30:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.568 [ 00:15:10.568 { 00:15:10.568 "name": "BaseBdev4", 00:15:10.568 "aliases": [ 00:15:10.568 "29c751b1-68f5-4ef6-855b-3997f52799f5" 00:15:10.568 ], 00:15:10.568 "product_name": "Malloc disk", 00:15:10.568 "block_size": 512, 00:15:10.568 "num_blocks": 65536, 00:15:10.568 "uuid": "29c751b1-68f5-4ef6-855b-3997f52799f5", 00:15:10.568 "assigned_rate_limits": { 00:15:10.568 "rw_ios_per_sec": 0, 00:15:10.568 
"rw_mbytes_per_sec": 0, 00:15:10.568 "r_mbytes_per_sec": 0, 00:15:10.568 "w_mbytes_per_sec": 0 00:15:10.568 }, 00:15:10.568 "claimed": true, 00:15:10.568 "claim_type": "exclusive_write", 00:15:10.568 "zoned": false, 00:15:10.568 "supported_io_types": { 00:15:10.568 "read": true, 00:15:10.568 "write": true, 00:15:10.568 "unmap": true, 00:15:10.568 "flush": true, 00:15:10.568 "reset": true, 00:15:10.568 "nvme_admin": false, 00:15:10.568 "nvme_io": false, 00:15:10.568 "nvme_io_md": false, 00:15:10.568 "write_zeroes": true, 00:15:10.568 "zcopy": true, 00:15:10.568 "get_zone_info": false, 00:15:10.568 "zone_management": false, 00:15:10.568 "zone_append": false, 00:15:10.568 "compare": false, 00:15:10.568 "compare_and_write": false, 00:15:10.568 "abort": true, 00:15:10.568 "seek_hole": false, 00:15:10.568 "seek_data": false, 00:15:10.568 "copy": true, 00:15:10.568 "nvme_iov_md": false 00:15:10.568 }, 00:15:10.568 "memory_domains": [ 00:15:10.568 { 00:15:10.568 "dma_device_id": "system", 00:15:10.568 "dma_device_type": 1 00:15:10.568 }, 00:15:10.568 { 00:15:10.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.568 "dma_device_type": 2 00:15:10.568 } 00:15:10.568 ], 00:15:10.568 "driver_specific": {} 00:15:10.568 } 00:15:10.568 ] 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:10.568 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.569 02:30:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.569 "name": "Existed_Raid", 00:15:10.569 "uuid": "1436beab-0a35-46b9-a3ee-645c51c62b33", 00:15:10.569 "strip_size_kb": 64, 00:15:10.569 "state": "online", 00:15:10.569 "raid_level": "raid5f", 00:15:10.569 "superblock": false, 00:15:10.569 "num_base_bdevs": 4, 00:15:10.569 "num_base_bdevs_discovered": 4, 00:15:10.569 "num_base_bdevs_operational": 4, 00:15:10.569 "base_bdevs_list": [ 00:15:10.569 { 00:15:10.569 "name": 
"BaseBdev1", 00:15:10.569 "uuid": "6d998027-dbcf-4339-93e6-71d3417e333c", 00:15:10.569 "is_configured": true, 00:15:10.569 "data_offset": 0, 00:15:10.569 "data_size": 65536 00:15:10.569 }, 00:15:10.569 { 00:15:10.569 "name": "BaseBdev2", 00:15:10.569 "uuid": "d72af631-52a3-41b1-96cd-438158822ec6", 00:15:10.569 "is_configured": true, 00:15:10.569 "data_offset": 0, 00:15:10.569 "data_size": 65536 00:15:10.569 }, 00:15:10.569 { 00:15:10.569 "name": "BaseBdev3", 00:15:10.569 "uuid": "816aa3eb-038f-4e39-b51c-a5f158647e96", 00:15:10.569 "is_configured": true, 00:15:10.569 "data_offset": 0, 00:15:10.569 "data_size": 65536 00:15:10.569 }, 00:15:10.569 { 00:15:10.569 "name": "BaseBdev4", 00:15:10.569 "uuid": "29c751b1-68f5-4ef6-855b-3997f52799f5", 00:15:10.569 "is_configured": true, 00:15:10.569 "data_offset": 0, 00:15:10.569 "data_size": 65536 00:15:10.569 } 00:15:10.569 ] 00:15:10.569 }' 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.569 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.828 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:10.829 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:10.829 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:10.829 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:10.829 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:10.829 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:10.829 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:10.829 02:30:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:10.829 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.829 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.829 [2024-11-28 02:30:44.498448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.088 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.088 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:11.088 "name": "Existed_Raid", 00:15:11.088 "aliases": [ 00:15:11.088 "1436beab-0a35-46b9-a3ee-645c51c62b33" 00:15:11.088 ], 00:15:11.088 "product_name": "Raid Volume", 00:15:11.088 "block_size": 512, 00:15:11.088 "num_blocks": 196608, 00:15:11.088 "uuid": "1436beab-0a35-46b9-a3ee-645c51c62b33", 00:15:11.088 "assigned_rate_limits": { 00:15:11.088 "rw_ios_per_sec": 0, 00:15:11.088 "rw_mbytes_per_sec": 0, 00:15:11.088 "r_mbytes_per_sec": 0, 00:15:11.088 "w_mbytes_per_sec": 0 00:15:11.088 }, 00:15:11.088 "claimed": false, 00:15:11.088 "zoned": false, 00:15:11.088 "supported_io_types": { 00:15:11.088 "read": true, 00:15:11.088 "write": true, 00:15:11.088 "unmap": false, 00:15:11.088 "flush": false, 00:15:11.088 "reset": true, 00:15:11.088 "nvme_admin": false, 00:15:11.088 "nvme_io": false, 00:15:11.088 "nvme_io_md": false, 00:15:11.088 "write_zeroes": true, 00:15:11.088 "zcopy": false, 00:15:11.088 "get_zone_info": false, 00:15:11.088 "zone_management": false, 00:15:11.088 "zone_append": false, 00:15:11.088 "compare": false, 00:15:11.088 "compare_and_write": false, 00:15:11.088 "abort": false, 00:15:11.088 "seek_hole": false, 00:15:11.088 "seek_data": false, 00:15:11.088 "copy": false, 00:15:11.088 "nvme_iov_md": false 00:15:11.088 }, 00:15:11.088 "driver_specific": { 00:15:11.088 "raid": { 00:15:11.088 "uuid": "1436beab-0a35-46b9-a3ee-645c51c62b33", 00:15:11.088 "strip_size_kb": 64, 
00:15:11.088 "state": "online", 00:15:11.088 "raid_level": "raid5f", 00:15:11.088 "superblock": false, 00:15:11.088 "num_base_bdevs": 4, 00:15:11.088 "num_base_bdevs_discovered": 4, 00:15:11.088 "num_base_bdevs_operational": 4, 00:15:11.088 "base_bdevs_list": [ 00:15:11.088 { 00:15:11.088 "name": "BaseBdev1", 00:15:11.088 "uuid": "6d998027-dbcf-4339-93e6-71d3417e333c", 00:15:11.088 "is_configured": true, 00:15:11.088 "data_offset": 0, 00:15:11.088 "data_size": 65536 00:15:11.088 }, 00:15:11.088 { 00:15:11.088 "name": "BaseBdev2", 00:15:11.088 "uuid": "d72af631-52a3-41b1-96cd-438158822ec6", 00:15:11.088 "is_configured": true, 00:15:11.088 "data_offset": 0, 00:15:11.088 "data_size": 65536 00:15:11.088 }, 00:15:11.088 { 00:15:11.088 "name": "BaseBdev3", 00:15:11.088 "uuid": "816aa3eb-038f-4e39-b51c-a5f158647e96", 00:15:11.088 "is_configured": true, 00:15:11.088 "data_offset": 0, 00:15:11.088 "data_size": 65536 00:15:11.088 }, 00:15:11.088 { 00:15:11.088 "name": "BaseBdev4", 00:15:11.088 "uuid": "29c751b1-68f5-4ef6-855b-3997f52799f5", 00:15:11.088 "is_configured": true, 00:15:11.089 "data_offset": 0, 00:15:11.089 "data_size": 65536 00:15:11.089 } 00:15:11.089 ] 00:15:11.089 } 00:15:11.089 } 00:15:11.089 }' 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:11.089 BaseBdev2 00:15:11.089 BaseBdev3 00:15:11.089 BaseBdev4' 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.089 02:30:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.089 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:11.349 [2024-11-28 02:30:44.821764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.349 02:30:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.349 "name": "Existed_Raid", 00:15:11.349 "uuid": "1436beab-0a35-46b9-a3ee-645c51c62b33", 00:15:11.349 "strip_size_kb": 64, 00:15:11.349 "state": "online", 00:15:11.349 "raid_level": "raid5f", 00:15:11.349 "superblock": false, 00:15:11.349 "num_base_bdevs": 4, 00:15:11.349 "num_base_bdevs_discovered": 3, 00:15:11.349 "num_base_bdevs_operational": 3, 00:15:11.349 "base_bdevs_list": [ 00:15:11.349 { 00:15:11.349 "name": null, 00:15:11.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.349 "is_configured": false, 00:15:11.349 "data_offset": 0, 00:15:11.349 "data_size": 65536 00:15:11.349 }, 00:15:11.349 { 00:15:11.349 "name": "BaseBdev2", 00:15:11.349 "uuid": "d72af631-52a3-41b1-96cd-438158822ec6", 00:15:11.349 "is_configured": true, 00:15:11.349 "data_offset": 0, 00:15:11.349 "data_size": 65536 00:15:11.349 }, 00:15:11.349 { 00:15:11.349 "name": "BaseBdev3", 00:15:11.349 "uuid": "816aa3eb-038f-4e39-b51c-a5f158647e96", 00:15:11.349 "is_configured": true, 00:15:11.349 "data_offset": 0, 00:15:11.349 "data_size": 65536 00:15:11.349 }, 00:15:11.349 { 00:15:11.349 "name": "BaseBdev4", 00:15:11.349 "uuid": "29c751b1-68f5-4ef6-855b-3997f52799f5", 00:15:11.349 "is_configured": true, 00:15:11.349 "data_offset": 0, 00:15:11.349 "data_size": 65536 00:15:11.349 } 00:15:11.349 ] 00:15:11.349 }' 00:15:11.349 
02:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.349 02:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.918 [2024-11-28 02:30:45.400857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:11.918 [2024-11-28 02:30:45.400959] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.918 [2024-11-28 02:30:45.488723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.918 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.918 [2024-11-28 02:30:45.532694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:12.178 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.179 [2024-11-28 02:30:45.675952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:12.179 [2024-11-28 02:30:45.676010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:12.179 02:30:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.179 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 BaseBdev2 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 [ 00:15:12.441 { 00:15:12.441 "name": "BaseBdev2", 00:15:12.441 "aliases": [ 00:15:12.441 "272ec884-7b2c-4950-b6e3-94d8d3cd78c8" 00:15:12.441 ], 00:15:12.441 "product_name": "Malloc disk", 00:15:12.441 "block_size": 512, 00:15:12.441 "num_blocks": 65536, 00:15:12.441 "uuid": "272ec884-7b2c-4950-b6e3-94d8d3cd78c8", 00:15:12.441 "assigned_rate_limits": { 00:15:12.441 "rw_ios_per_sec": 0, 00:15:12.441 "rw_mbytes_per_sec": 0, 00:15:12.441 "r_mbytes_per_sec": 0, 00:15:12.441 "w_mbytes_per_sec": 0 00:15:12.441 }, 00:15:12.441 "claimed": false, 00:15:12.441 "zoned": false, 00:15:12.441 "supported_io_types": { 00:15:12.441 "read": true, 00:15:12.441 "write": true, 00:15:12.441 "unmap": true, 00:15:12.441 "flush": true, 00:15:12.441 "reset": true, 00:15:12.441 "nvme_admin": false, 00:15:12.441 "nvme_io": false, 00:15:12.441 "nvme_io_md": false, 00:15:12.441 "write_zeroes": true, 00:15:12.441 "zcopy": true, 00:15:12.441 "get_zone_info": false, 00:15:12.441 "zone_management": false, 00:15:12.441 "zone_append": false, 00:15:12.441 "compare": false, 00:15:12.441 "compare_and_write": false, 00:15:12.441 "abort": true, 00:15:12.441 "seek_hole": false, 00:15:12.441 "seek_data": false, 00:15:12.441 "copy": true, 00:15:12.441 "nvme_iov_md": false 00:15:12.441 }, 00:15:12.441 "memory_domains": [ 00:15:12.441 { 00:15:12.441 "dma_device_id": "system", 00:15:12.441 "dma_device_type": 1 00:15:12.441 }, 
00:15:12.441 { 00:15:12.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.441 "dma_device_type": 2 00:15:12.441 } 00:15:12.441 ], 00:15:12.441 "driver_specific": {} 00:15:12.441 } 00:15:12.441 ] 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 BaseBdev3 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 [ 00:15:12.441 { 00:15:12.441 "name": "BaseBdev3", 00:15:12.441 "aliases": [ 00:15:12.441 "208ba64f-87a7-43dc-a33a-75625ea3e637" 00:15:12.441 ], 00:15:12.441 "product_name": "Malloc disk", 00:15:12.441 "block_size": 512, 00:15:12.441 "num_blocks": 65536, 00:15:12.441 "uuid": "208ba64f-87a7-43dc-a33a-75625ea3e637", 00:15:12.441 "assigned_rate_limits": { 00:15:12.441 "rw_ios_per_sec": 0, 00:15:12.441 "rw_mbytes_per_sec": 0, 00:15:12.441 "r_mbytes_per_sec": 0, 00:15:12.441 "w_mbytes_per_sec": 0 00:15:12.441 }, 00:15:12.441 "claimed": false, 00:15:12.441 "zoned": false, 00:15:12.441 "supported_io_types": { 00:15:12.441 "read": true, 00:15:12.441 "write": true, 00:15:12.441 "unmap": true, 00:15:12.441 "flush": true, 00:15:12.441 "reset": true, 00:15:12.441 "nvme_admin": false, 00:15:12.441 "nvme_io": false, 00:15:12.441 "nvme_io_md": false, 00:15:12.441 "write_zeroes": true, 00:15:12.441 "zcopy": true, 00:15:12.441 "get_zone_info": false, 00:15:12.441 "zone_management": false, 00:15:12.441 "zone_append": false, 00:15:12.441 "compare": false, 00:15:12.441 "compare_and_write": false, 00:15:12.441 "abort": true, 00:15:12.441 "seek_hole": false, 00:15:12.441 "seek_data": false, 00:15:12.441 "copy": true, 00:15:12.441 "nvme_iov_md": false 00:15:12.441 }, 00:15:12.441 "memory_domains": [ 00:15:12.441 { 00:15:12.441 "dma_device_id": "system", 00:15:12.441 
"dma_device_type": 1 00:15:12.441 }, 00:15:12.441 { 00:15:12.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.441 "dma_device_type": 2 00:15:12.441 } 00:15:12.441 ], 00:15:12.441 "driver_specific": {} 00:15:12.441 } 00:15:12.441 ] 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.441 02:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 BaseBdev4 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.441 02:30:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.441 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 [ 00:15:12.441 { 00:15:12.441 "name": "BaseBdev4", 00:15:12.441 "aliases": [ 00:15:12.441 "c78b548a-d42f-4e1e-b699-0ea36522475f" 00:15:12.441 ], 00:15:12.441 "product_name": "Malloc disk", 00:15:12.441 "block_size": 512, 00:15:12.441 "num_blocks": 65536, 00:15:12.441 "uuid": "c78b548a-d42f-4e1e-b699-0ea36522475f", 00:15:12.441 "assigned_rate_limits": { 00:15:12.441 "rw_ios_per_sec": 0, 00:15:12.441 "rw_mbytes_per_sec": 0, 00:15:12.442 "r_mbytes_per_sec": 0, 00:15:12.442 "w_mbytes_per_sec": 0 00:15:12.442 }, 00:15:12.442 "claimed": false, 00:15:12.442 "zoned": false, 00:15:12.442 "supported_io_types": { 00:15:12.442 "read": true, 00:15:12.442 "write": true, 00:15:12.442 "unmap": true, 00:15:12.442 "flush": true, 00:15:12.442 "reset": true, 00:15:12.442 "nvme_admin": false, 00:15:12.442 "nvme_io": false, 00:15:12.442 "nvme_io_md": false, 00:15:12.442 "write_zeroes": true, 00:15:12.442 "zcopy": true, 00:15:12.442 "get_zone_info": false, 00:15:12.442 "zone_management": false, 00:15:12.442 "zone_append": false, 00:15:12.442 "compare": false, 00:15:12.442 "compare_and_write": false, 00:15:12.442 "abort": true, 00:15:12.442 "seek_hole": false, 00:15:12.442 "seek_data": false, 00:15:12.442 "copy": true, 00:15:12.442 "nvme_iov_md": false 00:15:12.442 }, 00:15:12.442 "memory_domains": [ 00:15:12.442 { 00:15:12.442 
"dma_device_id": "system", 00:15:12.442 "dma_device_type": 1 00:15:12.442 }, 00:15:12.442 { 00:15:12.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.442 "dma_device_type": 2 00:15:12.442 } 00:15:12.442 ], 00:15:12.442 "driver_specific": {} 00:15:12.442 } 00:15:12.442 ] 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.442 [2024-11-28 02:30:46.052793] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.442 [2024-11-28 02:30:46.052880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.442 [2024-11-28 02:30:46.052929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.442 [2024-11-28 02:30:46.054623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.442 [2024-11-28 02:30:46.054727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.442 "name": "Existed_Raid", 00:15:12.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.442 "strip_size_kb": 64, 00:15:12.442 "state": "configuring", 00:15:12.442 "raid_level": "raid5f", 00:15:12.442 "superblock": false, 00:15:12.442 
"num_base_bdevs": 4, 00:15:12.442 "num_base_bdevs_discovered": 3, 00:15:12.442 "num_base_bdevs_operational": 4, 00:15:12.442 "base_bdevs_list": [ 00:15:12.442 { 00:15:12.442 "name": "BaseBdev1", 00:15:12.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.442 "is_configured": false, 00:15:12.442 "data_offset": 0, 00:15:12.442 "data_size": 0 00:15:12.442 }, 00:15:12.442 { 00:15:12.442 "name": "BaseBdev2", 00:15:12.442 "uuid": "272ec884-7b2c-4950-b6e3-94d8d3cd78c8", 00:15:12.442 "is_configured": true, 00:15:12.442 "data_offset": 0, 00:15:12.442 "data_size": 65536 00:15:12.442 }, 00:15:12.442 { 00:15:12.442 "name": "BaseBdev3", 00:15:12.442 "uuid": "208ba64f-87a7-43dc-a33a-75625ea3e637", 00:15:12.442 "is_configured": true, 00:15:12.442 "data_offset": 0, 00:15:12.442 "data_size": 65536 00:15:12.442 }, 00:15:12.442 { 00:15:12.442 "name": "BaseBdev4", 00:15:12.442 "uuid": "c78b548a-d42f-4e1e-b699-0ea36522475f", 00:15:12.442 "is_configured": true, 00:15:12.442 "data_offset": 0, 00:15:12.442 "data_size": 65536 00:15:12.442 } 00:15:12.442 ] 00:15:12.442 }' 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.442 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.013 [2024-11-28 02:30:46.496024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.013 "name": "Existed_Raid", 00:15:13.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.013 "strip_size_kb": 64, 00:15:13.013 "state": "configuring", 00:15:13.013 "raid_level": "raid5f", 00:15:13.013 "superblock": false, 00:15:13.013 "num_base_bdevs": 4, 
00:15:13.013 "num_base_bdevs_discovered": 2, 00:15:13.013 "num_base_bdevs_operational": 4, 00:15:13.013 "base_bdevs_list": [ 00:15:13.013 { 00:15:13.013 "name": "BaseBdev1", 00:15:13.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.013 "is_configured": false, 00:15:13.013 "data_offset": 0, 00:15:13.013 "data_size": 0 00:15:13.013 }, 00:15:13.013 { 00:15:13.013 "name": null, 00:15:13.013 "uuid": "272ec884-7b2c-4950-b6e3-94d8d3cd78c8", 00:15:13.013 "is_configured": false, 00:15:13.013 "data_offset": 0, 00:15:13.013 "data_size": 65536 00:15:13.013 }, 00:15:13.013 { 00:15:13.013 "name": "BaseBdev3", 00:15:13.013 "uuid": "208ba64f-87a7-43dc-a33a-75625ea3e637", 00:15:13.013 "is_configured": true, 00:15:13.013 "data_offset": 0, 00:15:13.013 "data_size": 65536 00:15:13.013 }, 00:15:13.013 { 00:15:13.013 "name": "BaseBdev4", 00:15:13.013 "uuid": "c78b548a-d42f-4e1e-b699-0ea36522475f", 00:15:13.013 "is_configured": true, 00:15:13.013 "data_offset": 0, 00:15:13.013 "data_size": 65536 00:15:13.013 } 00:15:13.013 ] 00:15:13.013 }' 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.013 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.274 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:13.274 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.274 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.274 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.274 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.274 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:13.274 02:30:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:13.274 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.274 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.534 [2024-11-28 02:30:46.972999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.534 BaseBdev1 00:15:13.534 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:13.535 02:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.535 02:30:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.535 [ 00:15:13.535 { 00:15:13.535 "name": "BaseBdev1", 00:15:13.535 "aliases": [ 00:15:13.535 "ce267d7e-bd7a-491d-ad5b-2c343c419056" 00:15:13.535 ], 00:15:13.535 "product_name": "Malloc disk", 00:15:13.535 "block_size": 512, 00:15:13.535 "num_blocks": 65536, 00:15:13.535 "uuid": "ce267d7e-bd7a-491d-ad5b-2c343c419056", 00:15:13.535 "assigned_rate_limits": { 00:15:13.535 "rw_ios_per_sec": 0, 00:15:13.535 "rw_mbytes_per_sec": 0, 00:15:13.535 "r_mbytes_per_sec": 0, 00:15:13.535 "w_mbytes_per_sec": 0 00:15:13.535 }, 00:15:13.535 "claimed": true, 00:15:13.535 "claim_type": "exclusive_write", 00:15:13.535 "zoned": false, 00:15:13.535 "supported_io_types": { 00:15:13.535 "read": true, 00:15:13.535 "write": true, 00:15:13.535 "unmap": true, 00:15:13.535 "flush": true, 00:15:13.535 "reset": true, 00:15:13.535 "nvme_admin": false, 00:15:13.535 "nvme_io": false, 00:15:13.535 "nvme_io_md": false, 00:15:13.535 "write_zeroes": true, 00:15:13.535 "zcopy": true, 00:15:13.535 "get_zone_info": false, 00:15:13.535 "zone_management": false, 00:15:13.535 "zone_append": false, 00:15:13.535 "compare": false, 00:15:13.535 "compare_and_write": false, 00:15:13.535 "abort": true, 00:15:13.535 "seek_hole": false, 00:15:13.535 "seek_data": false, 00:15:13.535 "copy": true, 00:15:13.535 "nvme_iov_md": false 00:15:13.535 }, 00:15:13.535 "memory_domains": [ 00:15:13.535 { 00:15:13.535 "dma_device_id": "system", 00:15:13.535 "dma_device_type": 1 00:15:13.535 }, 00:15:13.535 { 00:15:13.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.535 "dma_device_type": 2 00:15:13.535 } 00:15:13.535 ], 00:15:13.535 "driver_specific": {} 00:15:13.535 } 00:15:13.535 ] 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:13.535 02:30:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.535 "name": "Existed_Raid", 00:15:13.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.535 "strip_size_kb": 64, 00:15:13.535 "state": 
"configuring", 00:15:13.535 "raid_level": "raid5f", 00:15:13.535 "superblock": false, 00:15:13.535 "num_base_bdevs": 4, 00:15:13.535 "num_base_bdevs_discovered": 3, 00:15:13.535 "num_base_bdevs_operational": 4, 00:15:13.535 "base_bdevs_list": [ 00:15:13.535 { 00:15:13.535 "name": "BaseBdev1", 00:15:13.535 "uuid": "ce267d7e-bd7a-491d-ad5b-2c343c419056", 00:15:13.535 "is_configured": true, 00:15:13.535 "data_offset": 0, 00:15:13.535 "data_size": 65536 00:15:13.535 }, 00:15:13.535 { 00:15:13.535 "name": null, 00:15:13.535 "uuid": "272ec884-7b2c-4950-b6e3-94d8d3cd78c8", 00:15:13.535 "is_configured": false, 00:15:13.535 "data_offset": 0, 00:15:13.535 "data_size": 65536 00:15:13.535 }, 00:15:13.535 { 00:15:13.535 "name": "BaseBdev3", 00:15:13.535 "uuid": "208ba64f-87a7-43dc-a33a-75625ea3e637", 00:15:13.535 "is_configured": true, 00:15:13.535 "data_offset": 0, 00:15:13.535 "data_size": 65536 00:15:13.535 }, 00:15:13.535 { 00:15:13.535 "name": "BaseBdev4", 00:15:13.535 "uuid": "c78b548a-d42f-4e1e-b699-0ea36522475f", 00:15:13.535 "is_configured": true, 00:15:13.535 "data_offset": 0, 00:15:13.535 "data_size": 65536 00:15:13.535 } 00:15:13.535 ] 00:15:13.535 }' 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.535 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.795 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.795 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.795 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.795 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:13.795 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.055 02:30:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.055 [2024-11-28 02:30:47.492123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.055 02:30:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.055 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.055 "name": "Existed_Raid", 00:15:14.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.055 "strip_size_kb": 64, 00:15:14.055 "state": "configuring", 00:15:14.055 "raid_level": "raid5f", 00:15:14.055 "superblock": false, 00:15:14.056 "num_base_bdevs": 4, 00:15:14.056 "num_base_bdevs_discovered": 2, 00:15:14.056 "num_base_bdevs_operational": 4, 00:15:14.056 "base_bdevs_list": [ 00:15:14.056 { 00:15:14.056 "name": "BaseBdev1", 00:15:14.056 "uuid": "ce267d7e-bd7a-491d-ad5b-2c343c419056", 00:15:14.056 "is_configured": true, 00:15:14.056 "data_offset": 0, 00:15:14.056 "data_size": 65536 00:15:14.056 }, 00:15:14.056 { 00:15:14.056 "name": null, 00:15:14.056 "uuid": "272ec884-7b2c-4950-b6e3-94d8d3cd78c8", 00:15:14.056 "is_configured": false, 00:15:14.056 "data_offset": 0, 00:15:14.056 "data_size": 65536 00:15:14.056 }, 00:15:14.056 { 00:15:14.056 "name": null, 00:15:14.056 "uuid": "208ba64f-87a7-43dc-a33a-75625ea3e637", 00:15:14.056 "is_configured": false, 00:15:14.056 "data_offset": 0, 00:15:14.056 "data_size": 65536 00:15:14.056 }, 00:15:14.056 { 00:15:14.056 "name": "BaseBdev4", 00:15:14.056 "uuid": "c78b548a-d42f-4e1e-b699-0ea36522475f", 00:15:14.056 "is_configured": true, 00:15:14.056 "data_offset": 0, 00:15:14.056 "data_size": 65536 00:15:14.056 } 00:15:14.056 ] 00:15:14.056 }' 00:15:14.056 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.056 02:30:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.316 [2024-11-28 02:30:47.935474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.316 
02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.316 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.576 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.576 "name": "Existed_Raid", 00:15:14.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.576 "strip_size_kb": 64, 00:15:14.576 "state": "configuring", 00:15:14.576 "raid_level": "raid5f", 00:15:14.576 "superblock": false, 00:15:14.576 "num_base_bdevs": 4, 00:15:14.576 "num_base_bdevs_discovered": 3, 00:15:14.576 "num_base_bdevs_operational": 4, 00:15:14.576 "base_bdevs_list": [ 00:15:14.576 { 00:15:14.576 "name": "BaseBdev1", 00:15:14.576 "uuid": "ce267d7e-bd7a-491d-ad5b-2c343c419056", 00:15:14.576 "is_configured": true, 00:15:14.576 "data_offset": 0, 00:15:14.576 "data_size": 65536 00:15:14.576 }, 00:15:14.576 { 00:15:14.576 "name": null, 00:15:14.576 "uuid": "272ec884-7b2c-4950-b6e3-94d8d3cd78c8", 00:15:14.576 "is_configured": 
false, 00:15:14.576 "data_offset": 0, 00:15:14.576 "data_size": 65536 00:15:14.576 }, 00:15:14.576 { 00:15:14.576 "name": "BaseBdev3", 00:15:14.576 "uuid": "208ba64f-87a7-43dc-a33a-75625ea3e637", 00:15:14.576 "is_configured": true, 00:15:14.576 "data_offset": 0, 00:15:14.576 "data_size": 65536 00:15:14.576 }, 00:15:14.576 { 00:15:14.576 "name": "BaseBdev4", 00:15:14.576 "uuid": "c78b548a-d42f-4e1e-b699-0ea36522475f", 00:15:14.576 "is_configured": true, 00:15:14.576 "data_offset": 0, 00:15:14.576 "data_size": 65536 00:15:14.576 } 00:15:14.576 ] 00:15:14.576 }' 00:15:14.576 02:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.576 02:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.836 [2024-11-28 02:30:48.414669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.836 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.096 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.096 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.096 "name": "Existed_Raid", 00:15:15.096 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:15.096 "strip_size_kb": 64, 00:15:15.096 "state": "configuring", 00:15:15.096 "raid_level": "raid5f", 00:15:15.096 "superblock": false, 00:15:15.096 "num_base_bdevs": 4, 00:15:15.096 "num_base_bdevs_discovered": 2, 00:15:15.096 "num_base_bdevs_operational": 4, 00:15:15.096 "base_bdevs_list": [ 00:15:15.096 { 00:15:15.096 "name": null, 00:15:15.096 "uuid": "ce267d7e-bd7a-491d-ad5b-2c343c419056", 00:15:15.096 "is_configured": false, 00:15:15.096 "data_offset": 0, 00:15:15.096 "data_size": 65536 00:15:15.096 }, 00:15:15.096 { 00:15:15.096 "name": null, 00:15:15.096 "uuid": "272ec884-7b2c-4950-b6e3-94d8d3cd78c8", 00:15:15.096 "is_configured": false, 00:15:15.096 "data_offset": 0, 00:15:15.096 "data_size": 65536 00:15:15.096 }, 00:15:15.096 { 00:15:15.096 "name": "BaseBdev3", 00:15:15.096 "uuid": "208ba64f-87a7-43dc-a33a-75625ea3e637", 00:15:15.096 "is_configured": true, 00:15:15.096 "data_offset": 0, 00:15:15.096 "data_size": 65536 00:15:15.096 }, 00:15:15.096 { 00:15:15.096 "name": "BaseBdev4", 00:15:15.096 "uuid": "c78b548a-d42f-4e1e-b699-0ea36522475f", 00:15:15.096 "is_configured": true, 00:15:15.096 "data_offset": 0, 00:15:15.096 "data_size": 65536 00:15:15.096 } 00:15:15.096 ] 00:15:15.096 }' 00:15:15.096 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.096 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.356 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.356 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:15.356 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.356 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.356 02:30:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.356 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:15.356 02:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:15.356 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.356 02:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.356 [2024-11-28 02:30:49.007348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.356 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.616 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.616 "name": "Existed_Raid", 00:15:15.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.616 "strip_size_kb": 64, 00:15:15.616 "state": "configuring", 00:15:15.616 "raid_level": "raid5f", 00:15:15.616 "superblock": false, 00:15:15.616 "num_base_bdevs": 4, 00:15:15.616 "num_base_bdevs_discovered": 3, 00:15:15.616 "num_base_bdevs_operational": 4, 00:15:15.616 "base_bdevs_list": [ 00:15:15.616 { 00:15:15.616 "name": null, 00:15:15.616 "uuid": "ce267d7e-bd7a-491d-ad5b-2c343c419056", 00:15:15.616 "is_configured": false, 00:15:15.616 "data_offset": 0, 00:15:15.616 "data_size": 65536 00:15:15.616 }, 00:15:15.616 { 00:15:15.616 "name": "BaseBdev2", 00:15:15.616 "uuid": "272ec884-7b2c-4950-b6e3-94d8d3cd78c8", 00:15:15.616 "is_configured": true, 00:15:15.616 "data_offset": 0, 00:15:15.616 "data_size": 65536 00:15:15.616 }, 00:15:15.616 { 00:15:15.616 "name": "BaseBdev3", 00:15:15.616 "uuid": "208ba64f-87a7-43dc-a33a-75625ea3e637", 00:15:15.616 "is_configured": true, 00:15:15.616 "data_offset": 0, 00:15:15.616 "data_size": 65536 00:15:15.616 }, 00:15:15.616 { 00:15:15.616 "name": "BaseBdev4", 00:15:15.616 "uuid": "c78b548a-d42f-4e1e-b699-0ea36522475f", 00:15:15.616 "is_configured": true, 00:15:15.616 "data_offset": 0, 00:15:15.616 "data_size": 65536 00:15:15.616 } 00:15:15.616 ] 00:15:15.616 }' 00:15:15.616 02:30:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.616 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ce267d7e-bd7a-491d-ad5b-2c343c419056 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.876 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 [2024-11-28 02:30:49.581515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:16.137 [2024-11-28 
02:30:49.581624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:16.137 [2024-11-28 02:30:49.581647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:16.137 [2024-11-28 02:30:49.581944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:16.137 [2024-11-28 02:30:49.588510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:16.137 [2024-11-28 02:30:49.588570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:16.137 [2024-11-28 02:30:49.588859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.137 NewBaseBdev 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 [ 00:15:16.137 { 00:15:16.137 "name": "NewBaseBdev", 00:15:16.137 "aliases": [ 00:15:16.137 "ce267d7e-bd7a-491d-ad5b-2c343c419056" 00:15:16.137 ], 00:15:16.137 "product_name": "Malloc disk", 00:15:16.137 "block_size": 512, 00:15:16.137 "num_blocks": 65536, 00:15:16.137 "uuid": "ce267d7e-bd7a-491d-ad5b-2c343c419056", 00:15:16.137 "assigned_rate_limits": { 00:15:16.137 "rw_ios_per_sec": 0, 00:15:16.137 "rw_mbytes_per_sec": 0, 00:15:16.137 "r_mbytes_per_sec": 0, 00:15:16.137 "w_mbytes_per_sec": 0 00:15:16.137 }, 00:15:16.137 "claimed": true, 00:15:16.137 "claim_type": "exclusive_write", 00:15:16.137 "zoned": false, 00:15:16.137 "supported_io_types": { 00:15:16.137 "read": true, 00:15:16.137 "write": true, 00:15:16.137 "unmap": true, 00:15:16.137 "flush": true, 00:15:16.137 "reset": true, 00:15:16.137 "nvme_admin": false, 00:15:16.137 "nvme_io": false, 00:15:16.137 "nvme_io_md": false, 00:15:16.137 "write_zeroes": true, 00:15:16.137 "zcopy": true, 00:15:16.137 "get_zone_info": false, 00:15:16.137 "zone_management": false, 00:15:16.137 "zone_append": false, 00:15:16.137 "compare": false, 00:15:16.137 "compare_and_write": false, 00:15:16.137 "abort": true, 00:15:16.137 "seek_hole": false, 00:15:16.137 "seek_data": false, 00:15:16.137 "copy": true, 00:15:16.137 "nvme_iov_md": false 00:15:16.137 }, 00:15:16.137 "memory_domains": [ 00:15:16.137 { 00:15:16.137 "dma_device_id": "system", 00:15:16.137 "dma_device_type": 1 00:15:16.137 }, 00:15:16.137 { 00:15:16.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.137 "dma_device_type": 2 00:15:16.137 } 
00:15:16.137 ], 00:15:16.137 "driver_specific": {} 00:15:16.137 } 00:15:16.137 ] 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.137 "name": "Existed_Raid", 00:15:16.137 "uuid": "7f9c249b-e82e-47c0-aacb-c4e6f4d4a7d9", 00:15:16.137 "strip_size_kb": 64, 00:15:16.137 "state": "online", 00:15:16.137 "raid_level": "raid5f", 00:15:16.137 "superblock": false, 00:15:16.137 "num_base_bdevs": 4, 00:15:16.137 "num_base_bdevs_discovered": 4, 00:15:16.137 "num_base_bdevs_operational": 4, 00:15:16.137 "base_bdevs_list": [ 00:15:16.137 { 00:15:16.137 "name": "NewBaseBdev", 00:15:16.137 "uuid": "ce267d7e-bd7a-491d-ad5b-2c343c419056", 00:15:16.137 "is_configured": true, 00:15:16.137 "data_offset": 0, 00:15:16.137 "data_size": 65536 00:15:16.137 }, 00:15:16.137 { 00:15:16.137 "name": "BaseBdev2", 00:15:16.137 "uuid": "272ec884-7b2c-4950-b6e3-94d8d3cd78c8", 00:15:16.137 "is_configured": true, 00:15:16.137 "data_offset": 0, 00:15:16.137 "data_size": 65536 00:15:16.137 }, 00:15:16.137 { 00:15:16.137 "name": "BaseBdev3", 00:15:16.137 "uuid": "208ba64f-87a7-43dc-a33a-75625ea3e637", 00:15:16.137 "is_configured": true, 00:15:16.137 "data_offset": 0, 00:15:16.137 "data_size": 65536 00:15:16.137 }, 00:15:16.137 { 00:15:16.137 "name": "BaseBdev4", 00:15:16.137 "uuid": "c78b548a-d42f-4e1e-b699-0ea36522475f", 00:15:16.137 "is_configured": true, 00:15:16.137 "data_offset": 0, 00:15:16.137 "data_size": 65536 00:15:16.137 } 00:15:16.137 ] 00:15:16.137 }' 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.137 02:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.397 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.397 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:16.397 02:30:50 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:16.397 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:16.398 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:16.398 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:16.398 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:16.398 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.398 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.398 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:16.398 [2024-11-28 02:30:50.056636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.658 "name": "Existed_Raid", 00:15:16.658 "aliases": [ 00:15:16.658 "7f9c249b-e82e-47c0-aacb-c4e6f4d4a7d9" 00:15:16.658 ], 00:15:16.658 "product_name": "Raid Volume", 00:15:16.658 "block_size": 512, 00:15:16.658 "num_blocks": 196608, 00:15:16.658 "uuid": "7f9c249b-e82e-47c0-aacb-c4e6f4d4a7d9", 00:15:16.658 "assigned_rate_limits": { 00:15:16.658 "rw_ios_per_sec": 0, 00:15:16.658 "rw_mbytes_per_sec": 0, 00:15:16.658 "r_mbytes_per_sec": 0, 00:15:16.658 "w_mbytes_per_sec": 0 00:15:16.658 }, 00:15:16.658 "claimed": false, 00:15:16.658 "zoned": false, 00:15:16.658 "supported_io_types": { 00:15:16.658 "read": true, 00:15:16.658 "write": true, 00:15:16.658 "unmap": false, 00:15:16.658 "flush": false, 00:15:16.658 "reset": true, 00:15:16.658 "nvme_admin": false, 00:15:16.658 "nvme_io": false, 00:15:16.658 "nvme_io_md": 
false, 00:15:16.658 "write_zeroes": true, 00:15:16.658 "zcopy": false, 00:15:16.658 "get_zone_info": false, 00:15:16.658 "zone_management": false, 00:15:16.658 "zone_append": false, 00:15:16.658 "compare": false, 00:15:16.658 "compare_and_write": false, 00:15:16.658 "abort": false, 00:15:16.658 "seek_hole": false, 00:15:16.658 "seek_data": false, 00:15:16.658 "copy": false, 00:15:16.658 "nvme_iov_md": false 00:15:16.658 }, 00:15:16.658 "driver_specific": { 00:15:16.658 "raid": { 00:15:16.658 "uuid": "7f9c249b-e82e-47c0-aacb-c4e6f4d4a7d9", 00:15:16.658 "strip_size_kb": 64, 00:15:16.658 "state": "online", 00:15:16.658 "raid_level": "raid5f", 00:15:16.658 "superblock": false, 00:15:16.658 "num_base_bdevs": 4, 00:15:16.658 "num_base_bdevs_discovered": 4, 00:15:16.658 "num_base_bdevs_operational": 4, 00:15:16.658 "base_bdevs_list": [ 00:15:16.658 { 00:15:16.658 "name": "NewBaseBdev", 00:15:16.658 "uuid": "ce267d7e-bd7a-491d-ad5b-2c343c419056", 00:15:16.658 "is_configured": true, 00:15:16.658 "data_offset": 0, 00:15:16.658 "data_size": 65536 00:15:16.658 }, 00:15:16.658 { 00:15:16.658 "name": "BaseBdev2", 00:15:16.658 "uuid": "272ec884-7b2c-4950-b6e3-94d8d3cd78c8", 00:15:16.658 "is_configured": true, 00:15:16.658 "data_offset": 0, 00:15:16.658 "data_size": 65536 00:15:16.658 }, 00:15:16.658 { 00:15:16.658 "name": "BaseBdev3", 00:15:16.658 "uuid": "208ba64f-87a7-43dc-a33a-75625ea3e637", 00:15:16.658 "is_configured": true, 00:15:16.658 "data_offset": 0, 00:15:16.658 "data_size": 65536 00:15:16.658 }, 00:15:16.658 { 00:15:16.658 "name": "BaseBdev4", 00:15:16.658 "uuid": "c78b548a-d42f-4e1e-b699-0ea36522475f", 00:15:16.658 "is_configured": true, 00:15:16.658 "data_offset": 0, 00:15:16.658 "data_size": 65536 00:15:16.658 } 00:15:16.658 ] 00:15:16.658 } 00:15:16.658 } 00:15:16.658 }' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.658 02:30:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:16.658 BaseBdev2 00:15:16.658 BaseBdev3 00:15:16.658 BaseBdev4' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.658 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.934 02:30:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.934 [2024-11-28 02:30:50.363945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:16.934 [2024-11-28 02:30:50.363971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.934 [2024-11-28 02:30:50.364041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.934 [2024-11-28 02:30:50.364317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.934 [2024-11-28 02:30:50.364328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82523 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82523 ']' 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82523 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82523 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82523' 00:15:16.934 killing process with pid 82523 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82523 00:15:16.934 [2024-11-28 02:30:50.394899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.934 02:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82523 00:15:17.200 [2024-11-28 02:30:50.757381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:18.581 00:15:18.581 real 0m11.168s 00:15:18.581 user 0m17.834s 00:15:18.581 sys 0m1.970s 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.581 ************************************ 00:15:18.581 END TEST raid5f_state_function_test 00:15:18.581 ************************************ 00:15:18.581 02:30:51 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:18.581 02:30:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:18.581 02:30:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.581 02:30:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.581 ************************************ 00:15:18.581 START TEST 
raid5f_state_function_test_sb 00:15:18.581 ************************************ 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.581 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:18.582 
02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83189 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:18.582 Process raid pid: 83189 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83189' 00:15:18.582 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83189 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83189 ']' 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.582 02:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.582 [2024-11-28 02:30:51.981526] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:18.582 [2024-11-28 02:30:51.981653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.582 [2024-11-28 02:30:52.154219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.842 [2024-11-28 02:30:52.263902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.842 [2024-11-28 02:30:52.463867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.842 [2024-11-28 02:30:52.463901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.412 [2024-11-28 02:30:52.797374] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.412 [2024-11-28 02:30:52.797426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.412 [2024-11-28 02:30:52.797436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.412 [2024-11-28 02:30:52.797445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.412 [2024-11-28 02:30:52.797451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:19.412 [2024-11-28 02:30:52.797459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.412 [2024-11-28 02:30:52.797465] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:19.412 [2024-11-28 02:30:52.797473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.412 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.412 "name": "Existed_Raid", 00:15:19.412 "uuid": "7c791a08-bc22-4f20-9fef-87260a7e4522", 00:15:19.412 "strip_size_kb": 64, 00:15:19.412 "state": "configuring", 00:15:19.412 "raid_level": "raid5f", 00:15:19.412 "superblock": true, 00:15:19.412 "num_base_bdevs": 4, 00:15:19.412 "num_base_bdevs_discovered": 0, 00:15:19.412 "num_base_bdevs_operational": 4, 00:15:19.412 "base_bdevs_list": [ 00:15:19.412 { 00:15:19.412 "name": "BaseBdev1", 00:15:19.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.413 "is_configured": false, 00:15:19.413 "data_offset": 0, 00:15:19.413 "data_size": 0 00:15:19.413 }, 00:15:19.413 { 00:15:19.413 "name": "BaseBdev2", 00:15:19.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.413 "is_configured": false, 00:15:19.413 "data_offset": 0, 00:15:19.413 "data_size": 0 00:15:19.413 }, 00:15:19.413 { 00:15:19.413 "name": "BaseBdev3", 00:15:19.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.413 "is_configured": false, 00:15:19.413 "data_offset": 0, 00:15:19.413 "data_size": 0 00:15:19.413 }, 00:15:19.413 { 00:15:19.413 "name": "BaseBdev4", 00:15:19.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.413 "is_configured": false, 00:15:19.413 "data_offset": 0, 00:15:19.413 "data_size": 0 00:15:19.413 } 00:15:19.413 ] 00:15:19.413 }' 00:15:19.413 02:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.413 02:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.673 [2024-11-28 02:30:53.252525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.673 [2024-11-28 02:30:53.252560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.673 [2024-11-28 02:30:53.264523] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.673 [2024-11-28 02:30:53.264623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.673 [2024-11-28 02:30:53.264636] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.673 [2024-11-28 02:30:53.264645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.673 [2024-11-28 02:30:53.264651] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.673 [2024-11-28 02:30:53.264660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.673 [2024-11-28 02:30:53.264665] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:19.673 [2024-11-28 02:30:53.264674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.673 [2024-11-28 02:30:53.310744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.673 BaseBdev1 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.673 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.673 [ 00:15:19.673 { 00:15:19.673 "name": "BaseBdev1", 00:15:19.673 "aliases": [ 00:15:19.673 "f931d608-35c8-4859-a0d1-1cb7e6a2e545" 00:15:19.673 ], 00:15:19.673 "product_name": "Malloc disk", 00:15:19.673 "block_size": 512, 00:15:19.673 "num_blocks": 65536, 00:15:19.673 "uuid": "f931d608-35c8-4859-a0d1-1cb7e6a2e545", 00:15:19.673 "assigned_rate_limits": { 00:15:19.673 "rw_ios_per_sec": 0, 00:15:19.673 "rw_mbytes_per_sec": 0, 00:15:19.673 "r_mbytes_per_sec": 0, 00:15:19.673 "w_mbytes_per_sec": 0 00:15:19.673 }, 00:15:19.673 "claimed": true, 00:15:19.674 "claim_type": "exclusive_write", 00:15:19.674 "zoned": false, 00:15:19.674 "supported_io_types": { 00:15:19.674 "read": true, 00:15:19.674 "write": true, 00:15:19.674 "unmap": true, 00:15:19.674 "flush": true, 00:15:19.674 "reset": true, 00:15:19.674 "nvme_admin": false, 00:15:19.674 "nvme_io": false, 00:15:19.674 "nvme_io_md": false, 00:15:19.674 "write_zeroes": true, 00:15:19.674 "zcopy": true, 00:15:19.674 "get_zone_info": false, 00:15:19.674 "zone_management": false, 00:15:19.674 "zone_append": false, 00:15:19.674 "compare": false, 00:15:19.674 "compare_and_write": false, 00:15:19.674 "abort": true, 00:15:19.674 "seek_hole": false, 00:15:19.674 "seek_data": false, 00:15:19.674 "copy": true, 00:15:19.674 "nvme_iov_md": false 00:15:19.674 }, 00:15:19.674 "memory_domains": [ 00:15:19.674 { 00:15:19.674 "dma_device_id": "system", 00:15:19.674 "dma_device_type": 1 00:15:19.674 }, 00:15:19.674 { 00:15:19.674 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:19.674 "dma_device_type": 2 00:15:19.674 } 00:15:19.934 ], 00:15:19.934 "driver_specific": {} 00:15:19.934 } 00:15:19.934 ] 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.934 "name": "Existed_Raid", 00:15:19.934 "uuid": "f72a3a87-4c37-4876-8e8b-0a0f5daa46ba", 00:15:19.934 "strip_size_kb": 64, 00:15:19.934 "state": "configuring", 00:15:19.934 "raid_level": "raid5f", 00:15:19.934 "superblock": true, 00:15:19.934 "num_base_bdevs": 4, 00:15:19.934 "num_base_bdevs_discovered": 1, 00:15:19.934 "num_base_bdevs_operational": 4, 00:15:19.934 "base_bdevs_list": [ 00:15:19.934 { 00:15:19.934 "name": "BaseBdev1", 00:15:19.934 "uuid": "f931d608-35c8-4859-a0d1-1cb7e6a2e545", 00:15:19.934 "is_configured": true, 00:15:19.934 "data_offset": 2048, 00:15:19.934 "data_size": 63488 00:15:19.934 }, 00:15:19.934 { 00:15:19.934 "name": "BaseBdev2", 00:15:19.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.934 "is_configured": false, 00:15:19.934 "data_offset": 0, 00:15:19.934 "data_size": 0 00:15:19.934 }, 00:15:19.934 { 00:15:19.934 "name": "BaseBdev3", 00:15:19.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.934 "is_configured": false, 00:15:19.934 "data_offset": 0, 00:15:19.934 "data_size": 0 00:15:19.934 }, 00:15:19.934 { 00:15:19.934 "name": "BaseBdev4", 00:15:19.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.934 "is_configured": false, 00:15:19.934 "data_offset": 0, 00:15:19.934 "data_size": 0 00:15:19.934 } 00:15:19.934 ] 00:15:19.934 }' 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.934 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:20.194 02:30:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.194 [2024-11-28 02:30:53.766016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:20.194 [2024-11-28 02:30:53.766103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.194 [2024-11-28 02:30:53.778055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.194 [2024-11-28 02:30:53.779835] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.194 [2024-11-28 02:30:53.779929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.194 [2024-11-28 02:30:53.779967] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:20.194 [2024-11-28 02:30:53.779999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:20.194 [2024-11-28 02:30:53.780018] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:20.194 [2024-11-28 02:30:53.780038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.194 02:30:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.194 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.194 "name": "Existed_Raid", 00:15:20.194 "uuid": "31edbabc-0a64-414e-a0c7-6ac5c9de0152", 00:15:20.194 "strip_size_kb": 64, 00:15:20.194 "state": "configuring", 00:15:20.194 "raid_level": "raid5f", 00:15:20.194 "superblock": true, 00:15:20.194 "num_base_bdevs": 4, 00:15:20.194 "num_base_bdevs_discovered": 1, 00:15:20.194 "num_base_bdevs_operational": 4, 00:15:20.194 "base_bdevs_list": [ 00:15:20.194 { 00:15:20.194 "name": "BaseBdev1", 00:15:20.194 "uuid": "f931d608-35c8-4859-a0d1-1cb7e6a2e545", 00:15:20.194 "is_configured": true, 00:15:20.194 "data_offset": 2048, 00:15:20.194 "data_size": 63488 00:15:20.194 }, 00:15:20.194 { 00:15:20.194 "name": "BaseBdev2", 00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.194 "is_configured": false, 00:15:20.194 "data_offset": 0, 00:15:20.194 "data_size": 0 00:15:20.194 }, 00:15:20.194 { 00:15:20.194 "name": "BaseBdev3", 00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.194 "is_configured": false, 00:15:20.194 "data_offset": 0, 00:15:20.194 "data_size": 0 00:15:20.194 }, 00:15:20.194 { 00:15:20.194 "name": "BaseBdev4", 00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.194 "is_configured": false, 00:15:20.194 "data_offset": 0, 00:15:20.194 "data_size": 0 00:15:20.194 } 00:15:20.194 ] 00:15:20.195 }' 00:15:20.195 02:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.195 02:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.765 [2024-11-28 02:30:54.258044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.765 BaseBdev2 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.765 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.766 [ 00:15:20.766 { 00:15:20.766 "name": "BaseBdev2", 00:15:20.766 "aliases": [ 00:15:20.766 
"1a46f172-fc09-4eae-9a23-e4abee76b43a" 00:15:20.766 ], 00:15:20.766 "product_name": "Malloc disk", 00:15:20.766 "block_size": 512, 00:15:20.766 "num_blocks": 65536, 00:15:20.766 "uuid": "1a46f172-fc09-4eae-9a23-e4abee76b43a", 00:15:20.766 "assigned_rate_limits": { 00:15:20.766 "rw_ios_per_sec": 0, 00:15:20.766 "rw_mbytes_per_sec": 0, 00:15:20.766 "r_mbytes_per_sec": 0, 00:15:20.766 "w_mbytes_per_sec": 0 00:15:20.766 }, 00:15:20.766 "claimed": true, 00:15:20.766 "claim_type": "exclusive_write", 00:15:20.766 "zoned": false, 00:15:20.766 "supported_io_types": { 00:15:20.766 "read": true, 00:15:20.766 "write": true, 00:15:20.766 "unmap": true, 00:15:20.766 "flush": true, 00:15:20.766 "reset": true, 00:15:20.766 "nvme_admin": false, 00:15:20.766 "nvme_io": false, 00:15:20.766 "nvme_io_md": false, 00:15:20.766 "write_zeroes": true, 00:15:20.766 "zcopy": true, 00:15:20.766 "get_zone_info": false, 00:15:20.766 "zone_management": false, 00:15:20.766 "zone_append": false, 00:15:20.766 "compare": false, 00:15:20.766 "compare_and_write": false, 00:15:20.766 "abort": true, 00:15:20.766 "seek_hole": false, 00:15:20.766 "seek_data": false, 00:15:20.766 "copy": true, 00:15:20.766 "nvme_iov_md": false 00:15:20.766 }, 00:15:20.766 "memory_domains": [ 00:15:20.766 { 00:15:20.766 "dma_device_id": "system", 00:15:20.766 "dma_device_type": 1 00:15:20.766 }, 00:15:20.766 { 00:15:20.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.766 "dma_device_type": 2 00:15:20.766 } 00:15:20.766 ], 00:15:20.766 "driver_specific": {} 00:15:20.766 } 00:15:20.766 ] 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.766 "name": "Existed_Raid", 00:15:20.766 "uuid": 
"31edbabc-0a64-414e-a0c7-6ac5c9de0152", 00:15:20.766 "strip_size_kb": 64, 00:15:20.766 "state": "configuring", 00:15:20.766 "raid_level": "raid5f", 00:15:20.766 "superblock": true, 00:15:20.766 "num_base_bdevs": 4, 00:15:20.766 "num_base_bdevs_discovered": 2, 00:15:20.766 "num_base_bdevs_operational": 4, 00:15:20.766 "base_bdevs_list": [ 00:15:20.766 { 00:15:20.766 "name": "BaseBdev1", 00:15:20.766 "uuid": "f931d608-35c8-4859-a0d1-1cb7e6a2e545", 00:15:20.766 "is_configured": true, 00:15:20.766 "data_offset": 2048, 00:15:20.766 "data_size": 63488 00:15:20.766 }, 00:15:20.766 { 00:15:20.766 "name": "BaseBdev2", 00:15:20.766 "uuid": "1a46f172-fc09-4eae-9a23-e4abee76b43a", 00:15:20.766 "is_configured": true, 00:15:20.766 "data_offset": 2048, 00:15:20.766 "data_size": 63488 00:15:20.766 }, 00:15:20.766 { 00:15:20.766 "name": "BaseBdev3", 00:15:20.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.766 "is_configured": false, 00:15:20.766 "data_offset": 0, 00:15:20.766 "data_size": 0 00:15:20.766 }, 00:15:20.766 { 00:15:20.766 "name": "BaseBdev4", 00:15:20.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.766 "is_configured": false, 00:15:20.766 "data_offset": 0, 00:15:20.766 "data_size": 0 00:15:20.766 } 00:15:20.766 ] 00:15:20.766 }' 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.766 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.026 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:21.026 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.026 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.286 [2024-11-28 02:30:54.743751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:21.286 BaseBdev3 
00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.286 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.286 [ 00:15:21.286 { 00:15:21.286 "name": "BaseBdev3", 00:15:21.286 "aliases": [ 00:15:21.286 "a795d40c-6890-4775-a95b-5e9adaa33eaf" 00:15:21.286 ], 00:15:21.286 "product_name": "Malloc disk", 00:15:21.286 "block_size": 512, 00:15:21.286 "num_blocks": 65536, 00:15:21.286 "uuid": "a795d40c-6890-4775-a95b-5e9adaa33eaf", 00:15:21.287 
"assigned_rate_limits": { 00:15:21.287 "rw_ios_per_sec": 0, 00:15:21.287 "rw_mbytes_per_sec": 0, 00:15:21.287 "r_mbytes_per_sec": 0, 00:15:21.287 "w_mbytes_per_sec": 0 00:15:21.287 }, 00:15:21.287 "claimed": true, 00:15:21.287 "claim_type": "exclusive_write", 00:15:21.287 "zoned": false, 00:15:21.287 "supported_io_types": { 00:15:21.287 "read": true, 00:15:21.287 "write": true, 00:15:21.287 "unmap": true, 00:15:21.287 "flush": true, 00:15:21.287 "reset": true, 00:15:21.287 "nvme_admin": false, 00:15:21.287 "nvme_io": false, 00:15:21.287 "nvme_io_md": false, 00:15:21.287 "write_zeroes": true, 00:15:21.287 "zcopy": true, 00:15:21.287 "get_zone_info": false, 00:15:21.287 "zone_management": false, 00:15:21.287 "zone_append": false, 00:15:21.287 "compare": false, 00:15:21.287 "compare_and_write": false, 00:15:21.287 "abort": true, 00:15:21.287 "seek_hole": false, 00:15:21.287 "seek_data": false, 00:15:21.287 "copy": true, 00:15:21.287 "nvme_iov_md": false 00:15:21.287 }, 00:15:21.287 "memory_domains": [ 00:15:21.287 { 00:15:21.287 "dma_device_id": "system", 00:15:21.287 "dma_device_type": 1 00:15:21.287 }, 00:15:21.287 { 00:15:21.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.287 "dma_device_type": 2 00:15:21.287 } 00:15:21.287 ], 00:15:21.287 "driver_specific": {} 00:15:21.287 } 00:15:21.287 ] 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.287 "name": "Existed_Raid", 00:15:21.287 "uuid": "31edbabc-0a64-414e-a0c7-6ac5c9de0152", 00:15:21.287 "strip_size_kb": 64, 00:15:21.287 "state": "configuring", 00:15:21.287 "raid_level": "raid5f", 00:15:21.287 "superblock": true, 00:15:21.287 "num_base_bdevs": 4, 00:15:21.287 "num_base_bdevs_discovered": 3, 
00:15:21.287 "num_base_bdevs_operational": 4, 00:15:21.287 "base_bdevs_list": [ 00:15:21.287 { 00:15:21.287 "name": "BaseBdev1", 00:15:21.287 "uuid": "f931d608-35c8-4859-a0d1-1cb7e6a2e545", 00:15:21.287 "is_configured": true, 00:15:21.287 "data_offset": 2048, 00:15:21.287 "data_size": 63488 00:15:21.287 }, 00:15:21.287 { 00:15:21.287 "name": "BaseBdev2", 00:15:21.287 "uuid": "1a46f172-fc09-4eae-9a23-e4abee76b43a", 00:15:21.287 "is_configured": true, 00:15:21.287 "data_offset": 2048, 00:15:21.287 "data_size": 63488 00:15:21.287 }, 00:15:21.287 { 00:15:21.287 "name": "BaseBdev3", 00:15:21.287 "uuid": "a795d40c-6890-4775-a95b-5e9adaa33eaf", 00:15:21.287 "is_configured": true, 00:15:21.287 "data_offset": 2048, 00:15:21.287 "data_size": 63488 00:15:21.287 }, 00:15:21.287 { 00:15:21.287 "name": "BaseBdev4", 00:15:21.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.287 "is_configured": false, 00:15:21.287 "data_offset": 0, 00:15:21.287 "data_size": 0 00:15:21.287 } 00:15:21.287 ] 00:15:21.287 }' 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.287 02:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.547 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:21.547 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.547 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.806 [2024-11-28 02:30:55.242914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:21.807 [2024-11-28 02:30:55.243186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:21.807 [2024-11-28 02:30:55.243200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:21.807 [2024-11-28 
02:30:55.243479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:21.807 BaseBdev4 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.807 [2024-11-28 02:30:55.250420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:21.807 [2024-11-28 02:30:55.250485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:21.807 [2024-11-28 02:30:55.250779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:21.807 02:30:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.807 [ 00:15:21.807 { 00:15:21.807 "name": "BaseBdev4", 00:15:21.807 "aliases": [ 00:15:21.807 "1f0c7535-1408-4cfb-a7da-615f69dcd877" 00:15:21.807 ], 00:15:21.807 "product_name": "Malloc disk", 00:15:21.807 "block_size": 512, 00:15:21.807 "num_blocks": 65536, 00:15:21.807 "uuid": "1f0c7535-1408-4cfb-a7da-615f69dcd877", 00:15:21.807 "assigned_rate_limits": { 00:15:21.807 "rw_ios_per_sec": 0, 00:15:21.807 "rw_mbytes_per_sec": 0, 00:15:21.807 "r_mbytes_per_sec": 0, 00:15:21.807 "w_mbytes_per_sec": 0 00:15:21.807 }, 00:15:21.807 "claimed": true, 00:15:21.807 "claim_type": "exclusive_write", 00:15:21.807 "zoned": false, 00:15:21.807 "supported_io_types": { 00:15:21.807 "read": true, 00:15:21.807 "write": true, 00:15:21.807 "unmap": true, 00:15:21.807 "flush": true, 00:15:21.807 "reset": true, 00:15:21.807 "nvme_admin": false, 00:15:21.807 "nvme_io": false, 00:15:21.807 "nvme_io_md": false, 00:15:21.807 "write_zeroes": true, 00:15:21.807 "zcopy": true, 00:15:21.807 "get_zone_info": false, 00:15:21.807 "zone_management": false, 00:15:21.807 "zone_append": false, 00:15:21.807 "compare": false, 00:15:21.807 "compare_and_write": false, 00:15:21.807 "abort": true, 00:15:21.807 "seek_hole": false, 00:15:21.807 "seek_data": false, 00:15:21.807 "copy": true, 00:15:21.807 "nvme_iov_md": false 00:15:21.807 }, 00:15:21.807 "memory_domains": [ 00:15:21.807 { 00:15:21.807 "dma_device_id": "system", 00:15:21.807 "dma_device_type": 1 00:15:21.807 }, 00:15:21.807 { 00:15:21.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.807 "dma_device_type": 2 00:15:21.807 } 00:15:21.807 ], 00:15:21.807 "driver_specific": {} 00:15:21.807 } 00:15:21.807 ] 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.807 02:30:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.807 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.807 "name": "Existed_Raid", 00:15:21.807 "uuid": "31edbabc-0a64-414e-a0c7-6ac5c9de0152", 00:15:21.807 "strip_size_kb": 64, 00:15:21.807 "state": "online", 00:15:21.807 "raid_level": "raid5f", 00:15:21.807 "superblock": true, 00:15:21.807 "num_base_bdevs": 4, 00:15:21.807 "num_base_bdevs_discovered": 4, 00:15:21.807 "num_base_bdevs_operational": 4, 00:15:21.807 "base_bdevs_list": [ 00:15:21.807 { 00:15:21.807 "name": "BaseBdev1", 00:15:21.807 "uuid": "f931d608-35c8-4859-a0d1-1cb7e6a2e545", 00:15:21.807 "is_configured": true, 00:15:21.807 "data_offset": 2048, 00:15:21.807 "data_size": 63488 00:15:21.807 }, 00:15:21.807 { 00:15:21.807 "name": "BaseBdev2", 00:15:21.807 "uuid": "1a46f172-fc09-4eae-9a23-e4abee76b43a", 00:15:21.807 "is_configured": true, 00:15:21.807 "data_offset": 2048, 00:15:21.807 "data_size": 63488 00:15:21.807 }, 00:15:21.807 { 00:15:21.807 "name": "BaseBdev3", 00:15:21.807 "uuid": "a795d40c-6890-4775-a95b-5e9adaa33eaf", 00:15:21.807 "is_configured": true, 00:15:21.807 "data_offset": 2048, 00:15:21.807 "data_size": 63488 00:15:21.807 }, 00:15:21.807 { 00:15:21.807 "name": "BaseBdev4", 00:15:21.807 "uuid": "1f0c7535-1408-4cfb-a7da-615f69dcd877", 00:15:21.807 "is_configured": true, 00:15:21.808 "data_offset": 2048, 00:15:21.808 "data_size": 63488 00:15:21.808 } 00:15:21.808 ] 00:15:21.808 }' 00:15:21.808 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.808 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.067 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:22.067 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:22.068 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:22.068 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:22.068 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:22.068 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:22.068 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:22.068 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.068 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.068 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:22.327 [2024-11-28 02:30:55.745973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.327 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.327 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:22.327 "name": "Existed_Raid", 00:15:22.327 "aliases": [ 00:15:22.327 "31edbabc-0a64-414e-a0c7-6ac5c9de0152" 00:15:22.327 ], 00:15:22.327 "product_name": "Raid Volume", 00:15:22.327 "block_size": 512, 00:15:22.327 "num_blocks": 190464, 00:15:22.327 "uuid": "31edbabc-0a64-414e-a0c7-6ac5c9de0152", 00:15:22.327 "assigned_rate_limits": { 00:15:22.327 "rw_ios_per_sec": 0, 00:15:22.327 "rw_mbytes_per_sec": 0, 00:15:22.327 "r_mbytes_per_sec": 0, 00:15:22.327 "w_mbytes_per_sec": 0 00:15:22.327 }, 00:15:22.327 "claimed": false, 00:15:22.327 "zoned": false, 00:15:22.327 "supported_io_types": { 00:15:22.327 "read": true, 00:15:22.327 "write": true, 00:15:22.327 "unmap": false, 00:15:22.327 "flush": false, 
00:15:22.327 "reset": true, 00:15:22.327 "nvme_admin": false, 00:15:22.327 "nvme_io": false, 00:15:22.327 "nvme_io_md": false, 00:15:22.327 "write_zeroes": true, 00:15:22.327 "zcopy": false, 00:15:22.327 "get_zone_info": false, 00:15:22.327 "zone_management": false, 00:15:22.327 "zone_append": false, 00:15:22.327 "compare": false, 00:15:22.327 "compare_and_write": false, 00:15:22.327 "abort": false, 00:15:22.327 "seek_hole": false, 00:15:22.327 "seek_data": false, 00:15:22.327 "copy": false, 00:15:22.327 "nvme_iov_md": false 00:15:22.327 }, 00:15:22.327 "driver_specific": { 00:15:22.327 "raid": { 00:15:22.327 "uuid": "31edbabc-0a64-414e-a0c7-6ac5c9de0152", 00:15:22.327 "strip_size_kb": 64, 00:15:22.327 "state": "online", 00:15:22.327 "raid_level": "raid5f", 00:15:22.327 "superblock": true, 00:15:22.327 "num_base_bdevs": 4, 00:15:22.327 "num_base_bdevs_discovered": 4, 00:15:22.327 "num_base_bdevs_operational": 4, 00:15:22.327 "base_bdevs_list": [ 00:15:22.327 { 00:15:22.327 "name": "BaseBdev1", 00:15:22.327 "uuid": "f931d608-35c8-4859-a0d1-1cb7e6a2e545", 00:15:22.328 "is_configured": true, 00:15:22.328 "data_offset": 2048, 00:15:22.328 "data_size": 63488 00:15:22.328 }, 00:15:22.328 { 00:15:22.328 "name": "BaseBdev2", 00:15:22.328 "uuid": "1a46f172-fc09-4eae-9a23-e4abee76b43a", 00:15:22.328 "is_configured": true, 00:15:22.328 "data_offset": 2048, 00:15:22.328 "data_size": 63488 00:15:22.328 }, 00:15:22.328 { 00:15:22.328 "name": "BaseBdev3", 00:15:22.328 "uuid": "a795d40c-6890-4775-a95b-5e9adaa33eaf", 00:15:22.328 "is_configured": true, 00:15:22.328 "data_offset": 2048, 00:15:22.328 "data_size": 63488 00:15:22.328 }, 00:15:22.328 { 00:15:22.328 "name": "BaseBdev4", 00:15:22.328 "uuid": "1f0c7535-1408-4cfb-a7da-615f69dcd877", 00:15:22.328 "is_configured": true, 00:15:22.328 "data_offset": 2048, 00:15:22.328 "data_size": 63488 00:15:22.328 } 00:15:22.328 ] 00:15:22.328 } 00:15:22.328 } 00:15:22.328 }' 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:22.328 BaseBdev2 00:15:22.328 BaseBdev3 00:15:22.328 BaseBdev4' 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.328 02:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.588 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.588 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.588 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.588 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:22.588 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.588 02:30:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.588 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.588 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.588 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.588 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.588 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.589 [2024-11-28 02:30:56.065223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.589 "name": "Existed_Raid", 00:15:22.589 "uuid": "31edbabc-0a64-414e-a0c7-6ac5c9de0152", 00:15:22.589 "strip_size_kb": 64, 00:15:22.589 "state": "online", 00:15:22.589 "raid_level": "raid5f", 00:15:22.589 "superblock": true, 00:15:22.589 "num_base_bdevs": 4, 00:15:22.589 "num_base_bdevs_discovered": 3, 00:15:22.589 "num_base_bdevs_operational": 3, 00:15:22.589 "base_bdevs_list": [ 00:15:22.589 { 00:15:22.589 "name": 
null, 00:15:22.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.589 "is_configured": false, 00:15:22.589 "data_offset": 0, 00:15:22.589 "data_size": 63488 00:15:22.589 }, 00:15:22.589 { 00:15:22.589 "name": "BaseBdev2", 00:15:22.589 "uuid": "1a46f172-fc09-4eae-9a23-e4abee76b43a", 00:15:22.589 "is_configured": true, 00:15:22.589 "data_offset": 2048, 00:15:22.589 "data_size": 63488 00:15:22.589 }, 00:15:22.589 { 00:15:22.589 "name": "BaseBdev3", 00:15:22.589 "uuid": "a795d40c-6890-4775-a95b-5e9adaa33eaf", 00:15:22.589 "is_configured": true, 00:15:22.589 "data_offset": 2048, 00:15:22.589 "data_size": 63488 00:15:22.589 }, 00:15:22.589 { 00:15:22.589 "name": "BaseBdev4", 00:15:22.589 "uuid": "1f0c7535-1408-4cfb-a7da-615f69dcd877", 00:15:22.589 "is_configured": true, 00:15:22.589 "data_offset": 2048, 00:15:22.589 "data_size": 63488 00:15:22.589 } 00:15:22.589 ] 00:15:22.589 }' 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.589 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.159 [2024-11-28 02:30:56.632707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:23.159 [2024-11-28 02:30:56.632905] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.159 [2024-11-28 02:30:56.718911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.159 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.159 [2024-11-28 02:30:56.774842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.418 02:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.418 [2024-11-28 
02:30:56.925097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:23.418 [2024-11-28 02:30:56.925190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.418 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.418 02:30:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.678 BaseBdev2 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.678 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.678 [ 00:15:23.678 { 00:15:23.678 "name": "BaseBdev2", 00:15:23.678 "aliases": [ 00:15:23.678 "1790b8d4-aa82-4d48-89b8-cad43b612d4d" 00:15:23.678 ], 00:15:23.678 "product_name": "Malloc disk", 00:15:23.678 "block_size": 512, 00:15:23.678 
"num_blocks": 65536, 00:15:23.678 "uuid": "1790b8d4-aa82-4d48-89b8-cad43b612d4d", 00:15:23.678 "assigned_rate_limits": { 00:15:23.678 "rw_ios_per_sec": 0, 00:15:23.678 "rw_mbytes_per_sec": 0, 00:15:23.678 "r_mbytes_per_sec": 0, 00:15:23.678 "w_mbytes_per_sec": 0 00:15:23.678 }, 00:15:23.678 "claimed": false, 00:15:23.678 "zoned": false, 00:15:23.678 "supported_io_types": { 00:15:23.678 "read": true, 00:15:23.678 "write": true, 00:15:23.679 "unmap": true, 00:15:23.679 "flush": true, 00:15:23.679 "reset": true, 00:15:23.679 "nvme_admin": false, 00:15:23.679 "nvme_io": false, 00:15:23.679 "nvme_io_md": false, 00:15:23.679 "write_zeroes": true, 00:15:23.679 "zcopy": true, 00:15:23.679 "get_zone_info": false, 00:15:23.679 "zone_management": false, 00:15:23.679 "zone_append": false, 00:15:23.679 "compare": false, 00:15:23.679 "compare_and_write": false, 00:15:23.679 "abort": true, 00:15:23.679 "seek_hole": false, 00:15:23.679 "seek_data": false, 00:15:23.679 "copy": true, 00:15:23.679 "nvme_iov_md": false 00:15:23.679 }, 00:15:23.679 "memory_domains": [ 00:15:23.679 { 00:15:23.679 "dma_device_id": "system", 00:15:23.679 "dma_device_type": 1 00:15:23.679 }, 00:15:23.679 { 00:15:23.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.679 "dma_device_type": 2 00:15:23.679 } 00:15:23.679 ], 00:15:23.679 "driver_specific": {} 00:15:23.679 } 00:15:23.679 ] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:23.679 02:30:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.679 BaseBdev3 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.679 [ 00:15:23.679 { 00:15:23.679 "name": "BaseBdev3", 00:15:23.679 "aliases": [ 00:15:23.679 
"0c2f45cd-d63d-4c21-8c19-422cf90e34b0" 00:15:23.679 ], 00:15:23.679 "product_name": "Malloc disk", 00:15:23.679 "block_size": 512, 00:15:23.679 "num_blocks": 65536, 00:15:23.679 "uuid": "0c2f45cd-d63d-4c21-8c19-422cf90e34b0", 00:15:23.679 "assigned_rate_limits": { 00:15:23.679 "rw_ios_per_sec": 0, 00:15:23.679 "rw_mbytes_per_sec": 0, 00:15:23.679 "r_mbytes_per_sec": 0, 00:15:23.679 "w_mbytes_per_sec": 0 00:15:23.679 }, 00:15:23.679 "claimed": false, 00:15:23.679 "zoned": false, 00:15:23.679 "supported_io_types": { 00:15:23.679 "read": true, 00:15:23.679 "write": true, 00:15:23.679 "unmap": true, 00:15:23.679 "flush": true, 00:15:23.679 "reset": true, 00:15:23.679 "nvme_admin": false, 00:15:23.679 "nvme_io": false, 00:15:23.679 "nvme_io_md": false, 00:15:23.679 "write_zeroes": true, 00:15:23.679 "zcopy": true, 00:15:23.679 "get_zone_info": false, 00:15:23.679 "zone_management": false, 00:15:23.679 "zone_append": false, 00:15:23.679 "compare": false, 00:15:23.679 "compare_and_write": false, 00:15:23.679 "abort": true, 00:15:23.679 "seek_hole": false, 00:15:23.679 "seek_data": false, 00:15:23.679 "copy": true, 00:15:23.679 "nvme_iov_md": false 00:15:23.679 }, 00:15:23.679 "memory_domains": [ 00:15:23.679 { 00:15:23.679 "dma_device_id": "system", 00:15:23.679 "dma_device_type": 1 00:15:23.679 }, 00:15:23.679 { 00:15:23.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.679 "dma_device_type": 2 00:15:23.679 } 00:15:23.679 ], 00:15:23.679 "driver_specific": {} 00:15:23.679 } 00:15:23.679 ] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:23.679 02:30:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.679 BaseBdev4 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:23.679 [ 00:15:23.679 { 00:15:23.679 "name": "BaseBdev4", 00:15:23.679 "aliases": [ 00:15:23.679 "ebc38b20-017a-492f-8dbf-038f69590508" 00:15:23.679 ], 00:15:23.679 "product_name": "Malloc disk", 00:15:23.679 "block_size": 512, 00:15:23.679 "num_blocks": 65536, 00:15:23.679 "uuid": "ebc38b20-017a-492f-8dbf-038f69590508", 00:15:23.679 "assigned_rate_limits": { 00:15:23.679 "rw_ios_per_sec": 0, 00:15:23.679 "rw_mbytes_per_sec": 0, 00:15:23.679 "r_mbytes_per_sec": 0, 00:15:23.679 "w_mbytes_per_sec": 0 00:15:23.679 }, 00:15:23.679 "claimed": false, 00:15:23.679 "zoned": false, 00:15:23.679 "supported_io_types": { 00:15:23.679 "read": true, 00:15:23.679 "write": true, 00:15:23.679 "unmap": true, 00:15:23.679 "flush": true, 00:15:23.679 "reset": true, 00:15:23.679 "nvme_admin": false, 00:15:23.679 "nvme_io": false, 00:15:23.679 "nvme_io_md": false, 00:15:23.679 "write_zeroes": true, 00:15:23.679 "zcopy": true, 00:15:23.679 "get_zone_info": false, 00:15:23.679 "zone_management": false, 00:15:23.679 "zone_append": false, 00:15:23.679 "compare": false, 00:15:23.679 "compare_and_write": false, 00:15:23.679 "abort": true, 00:15:23.679 "seek_hole": false, 00:15:23.679 "seek_data": false, 00:15:23.679 "copy": true, 00:15:23.679 "nvme_iov_md": false 00:15:23.679 }, 00:15:23.679 "memory_domains": [ 00:15:23.679 { 00:15:23.679 "dma_device_id": "system", 00:15:23.679 "dma_device_type": 1 00:15:23.679 }, 00:15:23.679 { 00:15:23.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.679 "dma_device_type": 2 00:15:23.679 } 00:15:23.679 ], 00:15:23.679 "driver_specific": {} 00:15:23.679 } 00:15:23.679 ] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:23.679 02:30:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.679 [2024-11-28 02:30:57.293621] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.679 [2024-11-28 02:30:57.293723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.679 [2024-11-28 02:30:57.293750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.679 [2024-11-28 02:30:57.295566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.679 [2024-11-28 02:30:57.295616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.679 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.680 "name": "Existed_Raid", 00:15:23.680 "uuid": "7debe69e-977f-4d43-b2df-2af45ea6bed4", 00:15:23.680 "strip_size_kb": 64, 00:15:23.680 "state": "configuring", 00:15:23.680 "raid_level": "raid5f", 00:15:23.680 "superblock": true, 00:15:23.680 "num_base_bdevs": 4, 00:15:23.680 "num_base_bdevs_discovered": 3, 00:15:23.680 "num_base_bdevs_operational": 4, 00:15:23.680 "base_bdevs_list": [ 00:15:23.680 { 00:15:23.680 "name": "BaseBdev1", 00:15:23.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.680 "is_configured": false, 00:15:23.680 "data_offset": 0, 00:15:23.680 "data_size": 0 00:15:23.680 }, 00:15:23.680 { 00:15:23.680 "name": "BaseBdev2", 00:15:23.680 "uuid": "1790b8d4-aa82-4d48-89b8-cad43b612d4d", 00:15:23.680 "is_configured": true, 00:15:23.680 "data_offset": 2048, 00:15:23.680 
"data_size": 63488 00:15:23.680 }, 00:15:23.680 { 00:15:23.680 "name": "BaseBdev3", 00:15:23.680 "uuid": "0c2f45cd-d63d-4c21-8c19-422cf90e34b0", 00:15:23.680 "is_configured": true, 00:15:23.680 "data_offset": 2048, 00:15:23.680 "data_size": 63488 00:15:23.680 }, 00:15:23.680 { 00:15:23.680 "name": "BaseBdev4", 00:15:23.680 "uuid": "ebc38b20-017a-492f-8dbf-038f69590508", 00:15:23.680 "is_configured": true, 00:15:23.680 "data_offset": 2048, 00:15:23.680 "data_size": 63488 00:15:23.680 } 00:15:23.680 ] 00:15:23.680 }' 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.680 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.251 [2024-11-28 02:30:57.752807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.251 02:30:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.251 "name": "Existed_Raid", 00:15:24.251 "uuid": "7debe69e-977f-4d43-b2df-2af45ea6bed4", 00:15:24.251 "strip_size_kb": 64, 00:15:24.251 "state": "configuring", 00:15:24.251 "raid_level": "raid5f", 00:15:24.251 "superblock": true, 00:15:24.251 "num_base_bdevs": 4, 00:15:24.251 "num_base_bdevs_discovered": 2, 00:15:24.251 "num_base_bdevs_operational": 4, 00:15:24.251 "base_bdevs_list": [ 00:15:24.251 { 00:15:24.251 "name": "BaseBdev1", 00:15:24.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.251 "is_configured": false, 00:15:24.251 "data_offset": 0, 00:15:24.251 "data_size": 0 00:15:24.251 }, 00:15:24.251 { 00:15:24.251 "name": null, 00:15:24.251 "uuid": "1790b8d4-aa82-4d48-89b8-cad43b612d4d", 00:15:24.251 
"is_configured": false, 00:15:24.251 "data_offset": 0, 00:15:24.251 "data_size": 63488 00:15:24.251 }, 00:15:24.251 { 00:15:24.251 "name": "BaseBdev3", 00:15:24.251 "uuid": "0c2f45cd-d63d-4c21-8c19-422cf90e34b0", 00:15:24.251 "is_configured": true, 00:15:24.251 "data_offset": 2048, 00:15:24.251 "data_size": 63488 00:15:24.251 }, 00:15:24.251 { 00:15:24.251 "name": "BaseBdev4", 00:15:24.251 "uuid": "ebc38b20-017a-492f-8dbf-038f69590508", 00:15:24.251 "is_configured": true, 00:15:24.251 "data_offset": 2048, 00:15:24.251 "data_size": 63488 00:15:24.251 } 00:15:24.251 ] 00:15:24.251 }' 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.251 02:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.512 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.512 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:24.512 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.512 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.512 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.772 [2024-11-28 02:30:58.243415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:24.772 BaseBdev1 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.772 [ 00:15:24.772 { 00:15:24.772 "name": "BaseBdev1", 00:15:24.772 "aliases": [ 00:15:24.772 "9ed80ac7-a6ac-4725-80f4-381609b597ef" 00:15:24.772 ], 00:15:24.772 "product_name": "Malloc disk", 00:15:24.772 "block_size": 512, 00:15:24.772 "num_blocks": 65536, 00:15:24.772 "uuid": "9ed80ac7-a6ac-4725-80f4-381609b597ef", 
00:15:24.772 "assigned_rate_limits": { 00:15:24.772 "rw_ios_per_sec": 0, 00:15:24.772 "rw_mbytes_per_sec": 0, 00:15:24.772 "r_mbytes_per_sec": 0, 00:15:24.772 "w_mbytes_per_sec": 0 00:15:24.772 }, 00:15:24.772 "claimed": true, 00:15:24.772 "claim_type": "exclusive_write", 00:15:24.772 "zoned": false, 00:15:24.772 "supported_io_types": { 00:15:24.772 "read": true, 00:15:24.772 "write": true, 00:15:24.772 "unmap": true, 00:15:24.772 "flush": true, 00:15:24.772 "reset": true, 00:15:24.772 "nvme_admin": false, 00:15:24.772 "nvme_io": false, 00:15:24.772 "nvme_io_md": false, 00:15:24.772 "write_zeroes": true, 00:15:24.772 "zcopy": true, 00:15:24.772 "get_zone_info": false, 00:15:24.772 "zone_management": false, 00:15:24.772 "zone_append": false, 00:15:24.772 "compare": false, 00:15:24.772 "compare_and_write": false, 00:15:24.772 "abort": true, 00:15:24.772 "seek_hole": false, 00:15:24.772 "seek_data": false, 00:15:24.772 "copy": true, 00:15:24.772 "nvme_iov_md": false 00:15:24.772 }, 00:15:24.772 "memory_domains": [ 00:15:24.772 { 00:15:24.772 "dma_device_id": "system", 00:15:24.772 "dma_device_type": 1 00:15:24.772 }, 00:15:24.772 { 00:15:24.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.772 "dma_device_type": 2 00:15:24.772 } 00:15:24.772 ], 00:15:24.772 "driver_specific": {} 00:15:24.772 } 00:15:24.772 ] 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.772 02:30:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.772 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.772 "name": "Existed_Raid", 00:15:24.772 "uuid": "7debe69e-977f-4d43-b2df-2af45ea6bed4", 00:15:24.772 "strip_size_kb": 64, 00:15:24.772 "state": "configuring", 00:15:24.772 "raid_level": "raid5f", 00:15:24.772 "superblock": true, 00:15:24.772 "num_base_bdevs": 4, 00:15:24.772 "num_base_bdevs_discovered": 3, 00:15:24.772 "num_base_bdevs_operational": 4, 00:15:24.772 "base_bdevs_list": [ 00:15:24.772 { 00:15:24.772 "name": "BaseBdev1", 00:15:24.772 "uuid": "9ed80ac7-a6ac-4725-80f4-381609b597ef", 
00:15:24.772 "is_configured": true, 00:15:24.772 "data_offset": 2048, 00:15:24.772 "data_size": 63488 00:15:24.772 }, 00:15:24.772 { 00:15:24.772 "name": null, 00:15:24.772 "uuid": "1790b8d4-aa82-4d48-89b8-cad43b612d4d", 00:15:24.772 "is_configured": false, 00:15:24.772 "data_offset": 0, 00:15:24.772 "data_size": 63488 00:15:24.772 }, 00:15:24.772 { 00:15:24.772 "name": "BaseBdev3", 00:15:24.772 "uuid": "0c2f45cd-d63d-4c21-8c19-422cf90e34b0", 00:15:24.772 "is_configured": true, 00:15:24.772 "data_offset": 2048, 00:15:24.772 "data_size": 63488 00:15:24.773 }, 00:15:24.773 { 00:15:24.773 "name": "BaseBdev4", 00:15:24.773 "uuid": "ebc38b20-017a-492f-8dbf-038f69590508", 00:15:24.773 "is_configured": true, 00:15:24.773 "data_offset": 2048, 00:15:24.773 "data_size": 63488 00:15:24.773 } 00:15:24.773 ] 00:15:24.773 }' 00:15:24.773 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.773 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.032 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:25.032 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.032 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.032 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.032 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.033 [2024-11-28 02:30:58.698692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.033 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.293 02:30:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.293 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.293 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.293 "name": "Existed_Raid", 00:15:25.293 "uuid": "7debe69e-977f-4d43-b2df-2af45ea6bed4", 00:15:25.293 "strip_size_kb": 64, 00:15:25.293 "state": "configuring", 00:15:25.293 "raid_level": "raid5f", 00:15:25.293 "superblock": true, 00:15:25.293 "num_base_bdevs": 4, 00:15:25.293 "num_base_bdevs_discovered": 2, 00:15:25.293 "num_base_bdevs_operational": 4, 00:15:25.293 "base_bdevs_list": [ 00:15:25.293 { 00:15:25.293 "name": "BaseBdev1", 00:15:25.293 "uuid": "9ed80ac7-a6ac-4725-80f4-381609b597ef", 00:15:25.293 "is_configured": true, 00:15:25.293 "data_offset": 2048, 00:15:25.293 "data_size": 63488 00:15:25.293 }, 00:15:25.293 { 00:15:25.293 "name": null, 00:15:25.293 "uuid": "1790b8d4-aa82-4d48-89b8-cad43b612d4d", 00:15:25.293 "is_configured": false, 00:15:25.293 "data_offset": 0, 00:15:25.293 "data_size": 63488 00:15:25.293 }, 00:15:25.293 { 00:15:25.293 "name": null, 00:15:25.293 "uuid": "0c2f45cd-d63d-4c21-8c19-422cf90e34b0", 00:15:25.293 "is_configured": false, 00:15:25.293 "data_offset": 0, 00:15:25.293 "data_size": 63488 00:15:25.293 }, 00:15:25.293 { 00:15:25.293 "name": "BaseBdev4", 00:15:25.293 "uuid": "ebc38b20-017a-492f-8dbf-038f69590508", 00:15:25.293 "is_configured": true, 00:15:25.293 "data_offset": 2048, 00:15:25.293 "data_size": 63488 00:15:25.293 } 00:15:25.293 ] 00:15:25.293 }' 00:15:25.293 02:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.293 02:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.553 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.553 02:30:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.553 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.553 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:25.553 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.813 [2024-11-28 02:30:59.241747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.813 "name": "Existed_Raid", 00:15:25.813 "uuid": "7debe69e-977f-4d43-b2df-2af45ea6bed4", 00:15:25.813 "strip_size_kb": 64, 00:15:25.813 "state": "configuring", 00:15:25.813 "raid_level": "raid5f", 00:15:25.813 "superblock": true, 00:15:25.813 "num_base_bdevs": 4, 00:15:25.813 "num_base_bdevs_discovered": 3, 00:15:25.813 "num_base_bdevs_operational": 4, 00:15:25.813 "base_bdevs_list": [ 00:15:25.813 { 00:15:25.813 "name": "BaseBdev1", 00:15:25.813 "uuid": "9ed80ac7-a6ac-4725-80f4-381609b597ef", 00:15:25.813 "is_configured": true, 00:15:25.813 "data_offset": 2048, 00:15:25.813 "data_size": 63488 00:15:25.813 }, 00:15:25.813 { 00:15:25.813 "name": null, 00:15:25.813 "uuid": "1790b8d4-aa82-4d48-89b8-cad43b612d4d", 00:15:25.813 "is_configured": false, 00:15:25.813 "data_offset": 0, 00:15:25.813 "data_size": 63488 00:15:25.813 }, 00:15:25.813 { 00:15:25.813 "name": "BaseBdev3", 00:15:25.813 "uuid": "0c2f45cd-d63d-4c21-8c19-422cf90e34b0", 
00:15:25.813 "is_configured": true, 00:15:25.813 "data_offset": 2048, 00:15:25.813 "data_size": 63488 00:15:25.813 }, 00:15:25.813 { 00:15:25.813 "name": "BaseBdev4", 00:15:25.813 "uuid": "ebc38b20-017a-492f-8dbf-038f69590508", 00:15:25.813 "is_configured": true, 00:15:25.813 "data_offset": 2048, 00:15:25.813 "data_size": 63488 00:15:25.813 } 00:15:25.813 ] 00:15:25.813 }' 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.813 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.073 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.073 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.073 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.073 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:26.073 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.073 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:26.073 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:26.073 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.073 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.073 [2024-11-28 02:30:59.704986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.333 "name": "Existed_Raid", 00:15:26.333 "uuid": "7debe69e-977f-4d43-b2df-2af45ea6bed4", 00:15:26.333 "strip_size_kb": 64, 00:15:26.333 "state": "configuring", 00:15:26.333 "raid_level": "raid5f", 
00:15:26.333 "superblock": true, 00:15:26.333 "num_base_bdevs": 4, 00:15:26.333 "num_base_bdevs_discovered": 2, 00:15:26.333 "num_base_bdevs_operational": 4, 00:15:26.333 "base_bdevs_list": [ 00:15:26.333 { 00:15:26.333 "name": null, 00:15:26.333 "uuid": "9ed80ac7-a6ac-4725-80f4-381609b597ef", 00:15:26.333 "is_configured": false, 00:15:26.333 "data_offset": 0, 00:15:26.333 "data_size": 63488 00:15:26.333 }, 00:15:26.333 { 00:15:26.333 "name": null, 00:15:26.333 "uuid": "1790b8d4-aa82-4d48-89b8-cad43b612d4d", 00:15:26.333 "is_configured": false, 00:15:26.333 "data_offset": 0, 00:15:26.333 "data_size": 63488 00:15:26.333 }, 00:15:26.333 { 00:15:26.333 "name": "BaseBdev3", 00:15:26.333 "uuid": "0c2f45cd-d63d-4c21-8c19-422cf90e34b0", 00:15:26.333 "is_configured": true, 00:15:26.333 "data_offset": 2048, 00:15:26.333 "data_size": 63488 00:15:26.333 }, 00:15:26.333 { 00:15:26.333 "name": "BaseBdev4", 00:15:26.333 "uuid": "ebc38b20-017a-492f-8dbf-038f69590508", 00:15:26.333 "is_configured": true, 00:15:26.333 "data_offset": 2048, 00:15:26.333 "data_size": 63488 00:15:26.333 } 00:15:26.333 ] 00:15:26.333 }' 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.333 02:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.593 [2024-11-28 02:31:00.261569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.593 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.853 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:26.853 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.853 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.853 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.853 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.853 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.853 "name": "Existed_Raid", 00:15:26.853 "uuid": "7debe69e-977f-4d43-b2df-2af45ea6bed4", 00:15:26.853 "strip_size_kb": 64, 00:15:26.853 "state": "configuring", 00:15:26.853 "raid_level": "raid5f", 00:15:26.853 "superblock": true, 00:15:26.853 "num_base_bdevs": 4, 00:15:26.853 "num_base_bdevs_discovered": 3, 00:15:26.853 "num_base_bdevs_operational": 4, 00:15:26.853 "base_bdevs_list": [ 00:15:26.853 { 00:15:26.853 "name": null, 00:15:26.853 "uuid": "9ed80ac7-a6ac-4725-80f4-381609b597ef", 00:15:26.853 "is_configured": false, 00:15:26.853 "data_offset": 0, 00:15:26.853 "data_size": 63488 00:15:26.853 }, 00:15:26.853 { 00:15:26.853 "name": "BaseBdev2", 00:15:26.853 "uuid": "1790b8d4-aa82-4d48-89b8-cad43b612d4d", 00:15:26.853 "is_configured": true, 00:15:26.853 "data_offset": 2048, 00:15:26.853 "data_size": 63488 00:15:26.853 }, 00:15:26.853 { 00:15:26.853 "name": "BaseBdev3", 00:15:26.853 "uuid": "0c2f45cd-d63d-4c21-8c19-422cf90e34b0", 00:15:26.853 "is_configured": true, 00:15:26.853 "data_offset": 2048, 00:15:26.853 "data_size": 63488 00:15:26.853 }, 00:15:26.853 { 00:15:26.853 "name": "BaseBdev4", 00:15:26.853 "uuid": "ebc38b20-017a-492f-8dbf-038f69590508", 00:15:26.853 "is_configured": true, 00:15:26.853 "data_offset": 2048, 00:15:26.853 "data_size": 63488 00:15:26.853 } 00:15:26.853 ] 00:15:26.853 }' 00:15:26.853 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:15:26.853 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9ed80ac7-a6ac-4725-80f4-381609b597ef 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.113 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.374 [2024-11-28 02:31:00.814773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:27.374 [2024-11-28 02:31:00.815017] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:27.374 [2024-11-28 02:31:00.815031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:27.374 [2024-11-28 02:31:00.815272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:27.374 NewBaseBdev 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.374 [2024-11-28 02:31:00.821781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:27.374 [2024-11-28 02:31:00.821846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:27.374 [2024-11-28 02:31:00.822063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.374 [ 00:15:27.374 { 00:15:27.374 "name": "NewBaseBdev", 00:15:27.374 "aliases": [ 00:15:27.374 "9ed80ac7-a6ac-4725-80f4-381609b597ef" 00:15:27.374 ], 00:15:27.374 "product_name": "Malloc disk", 00:15:27.374 "block_size": 512, 00:15:27.374 "num_blocks": 65536, 00:15:27.374 "uuid": "9ed80ac7-a6ac-4725-80f4-381609b597ef", 00:15:27.374 "assigned_rate_limits": { 00:15:27.374 "rw_ios_per_sec": 0, 00:15:27.374 "rw_mbytes_per_sec": 0, 00:15:27.374 "r_mbytes_per_sec": 0, 00:15:27.374 "w_mbytes_per_sec": 0 00:15:27.374 }, 00:15:27.374 "claimed": true, 00:15:27.374 "claim_type": "exclusive_write", 00:15:27.374 "zoned": false, 00:15:27.374 "supported_io_types": { 00:15:27.374 "read": true, 00:15:27.374 "write": true, 00:15:27.374 "unmap": true, 00:15:27.374 "flush": true, 00:15:27.374 "reset": true, 00:15:27.374 "nvme_admin": false, 00:15:27.374 "nvme_io": false, 00:15:27.374 "nvme_io_md": false, 00:15:27.374 "write_zeroes": true, 00:15:27.374 "zcopy": true, 00:15:27.374 "get_zone_info": false, 00:15:27.374 "zone_management": false, 00:15:27.374 "zone_append": false, 00:15:27.374 "compare": false, 00:15:27.374 "compare_and_write": false, 00:15:27.374 "abort": true, 00:15:27.374 "seek_hole": false, 00:15:27.374 "seek_data": false, 00:15:27.374 "copy": true, 00:15:27.374 "nvme_iov_md": false 00:15:27.374 }, 00:15:27.374 "memory_domains": [ 00:15:27.374 { 00:15:27.374 "dma_device_id": "system", 00:15:27.374 "dma_device_type": 1 00:15:27.374 }, 00:15:27.374 { 00:15:27.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.374 "dma_device_type": 2 00:15:27.374 } 
00:15:27.374 ], 00:15:27.374 "driver_specific": {} 00:15:27.374 } 00:15:27.374 ] 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.374 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.374 
02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.375 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.375 "name": "Existed_Raid", 00:15:27.375 "uuid": "7debe69e-977f-4d43-b2df-2af45ea6bed4", 00:15:27.375 "strip_size_kb": 64, 00:15:27.375 "state": "online", 00:15:27.375 "raid_level": "raid5f", 00:15:27.375 "superblock": true, 00:15:27.375 "num_base_bdevs": 4, 00:15:27.375 "num_base_bdevs_discovered": 4, 00:15:27.375 "num_base_bdevs_operational": 4, 00:15:27.375 "base_bdevs_list": [ 00:15:27.375 { 00:15:27.375 "name": "NewBaseBdev", 00:15:27.375 "uuid": "9ed80ac7-a6ac-4725-80f4-381609b597ef", 00:15:27.375 "is_configured": true, 00:15:27.375 "data_offset": 2048, 00:15:27.375 "data_size": 63488 00:15:27.375 }, 00:15:27.375 { 00:15:27.375 "name": "BaseBdev2", 00:15:27.375 "uuid": "1790b8d4-aa82-4d48-89b8-cad43b612d4d", 00:15:27.375 "is_configured": true, 00:15:27.375 "data_offset": 2048, 00:15:27.375 "data_size": 63488 00:15:27.375 }, 00:15:27.375 { 00:15:27.375 "name": "BaseBdev3", 00:15:27.375 "uuid": "0c2f45cd-d63d-4c21-8c19-422cf90e34b0", 00:15:27.375 "is_configured": true, 00:15:27.375 "data_offset": 2048, 00:15:27.375 "data_size": 63488 00:15:27.375 }, 00:15:27.375 { 00:15:27.375 "name": "BaseBdev4", 00:15:27.375 "uuid": "ebc38b20-017a-492f-8dbf-038f69590508", 00:15:27.375 "is_configured": true, 00:15:27.375 "data_offset": 2048, 00:15:27.375 "data_size": 63488 00:15:27.375 } 00:15:27.375 ] 00:15:27.375 }' 00:15:27.375 02:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.375 02:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.945 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:27.945 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:15:27.945 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.945 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.945 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.945 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.945 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:27.945 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.946 [2024-11-28 02:31:01.329437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.946 "name": "Existed_Raid", 00:15:27.946 "aliases": [ 00:15:27.946 "7debe69e-977f-4d43-b2df-2af45ea6bed4" 00:15:27.946 ], 00:15:27.946 "product_name": "Raid Volume", 00:15:27.946 "block_size": 512, 00:15:27.946 "num_blocks": 190464, 00:15:27.946 "uuid": "7debe69e-977f-4d43-b2df-2af45ea6bed4", 00:15:27.946 "assigned_rate_limits": { 00:15:27.946 "rw_ios_per_sec": 0, 00:15:27.946 "rw_mbytes_per_sec": 0, 00:15:27.946 "r_mbytes_per_sec": 0, 00:15:27.946 "w_mbytes_per_sec": 0 00:15:27.946 }, 00:15:27.946 "claimed": false, 00:15:27.946 "zoned": false, 00:15:27.946 "supported_io_types": { 00:15:27.946 "read": true, 00:15:27.946 "write": true, 00:15:27.946 "unmap": false, 00:15:27.946 "flush": false, 
00:15:27.946 "reset": true, 00:15:27.946 "nvme_admin": false, 00:15:27.946 "nvme_io": false, 00:15:27.946 "nvme_io_md": false, 00:15:27.946 "write_zeroes": true, 00:15:27.946 "zcopy": false, 00:15:27.946 "get_zone_info": false, 00:15:27.946 "zone_management": false, 00:15:27.946 "zone_append": false, 00:15:27.946 "compare": false, 00:15:27.946 "compare_and_write": false, 00:15:27.946 "abort": false, 00:15:27.946 "seek_hole": false, 00:15:27.946 "seek_data": false, 00:15:27.946 "copy": false, 00:15:27.946 "nvme_iov_md": false 00:15:27.946 }, 00:15:27.946 "driver_specific": { 00:15:27.946 "raid": { 00:15:27.946 "uuid": "7debe69e-977f-4d43-b2df-2af45ea6bed4", 00:15:27.946 "strip_size_kb": 64, 00:15:27.946 "state": "online", 00:15:27.946 "raid_level": "raid5f", 00:15:27.946 "superblock": true, 00:15:27.946 "num_base_bdevs": 4, 00:15:27.946 "num_base_bdevs_discovered": 4, 00:15:27.946 "num_base_bdevs_operational": 4, 00:15:27.946 "base_bdevs_list": [ 00:15:27.946 { 00:15:27.946 "name": "NewBaseBdev", 00:15:27.946 "uuid": "9ed80ac7-a6ac-4725-80f4-381609b597ef", 00:15:27.946 "is_configured": true, 00:15:27.946 "data_offset": 2048, 00:15:27.946 "data_size": 63488 00:15:27.946 }, 00:15:27.946 { 00:15:27.946 "name": "BaseBdev2", 00:15:27.946 "uuid": "1790b8d4-aa82-4d48-89b8-cad43b612d4d", 00:15:27.946 "is_configured": true, 00:15:27.946 "data_offset": 2048, 00:15:27.946 "data_size": 63488 00:15:27.946 }, 00:15:27.946 { 00:15:27.946 "name": "BaseBdev3", 00:15:27.946 "uuid": "0c2f45cd-d63d-4c21-8c19-422cf90e34b0", 00:15:27.946 "is_configured": true, 00:15:27.946 "data_offset": 2048, 00:15:27.946 "data_size": 63488 00:15:27.946 }, 00:15:27.946 { 00:15:27.946 "name": "BaseBdev4", 00:15:27.946 "uuid": "ebc38b20-017a-492f-8dbf-038f69590508", 00:15:27.946 "is_configured": true, 00:15:27.946 "data_offset": 2048, 00:15:27.946 "data_size": 63488 00:15:27.946 } 00:15:27.946 ] 00:15:27.946 } 00:15:27.946 } 00:15:27.946 }' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:27.946 BaseBdev2 00:15:27.946 BaseBdev3 00:15:27.946 BaseBdev4' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.946 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:28.206 02:31:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.206 [2024-11-28 02:31:01.676595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.206 [2024-11-28 02:31:01.676665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.206 [2024-11-28 02:31:01.676750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.206 [2024-11-28 02:31:01.677059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.206 [2024-11-28 02:31:01.677112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83189 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83189 ']' 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83189 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83189 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.206 killing process with pid 83189 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83189' 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83189 00:15:28.206 [2024-11-28 02:31:01.724623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.206 02:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83189 00:15:28.466 [2024-11-28 02:31:02.085508] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:29.967 02:31:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:29.967 00:15:29.967 real 0m11.251s 00:15:29.967 user 0m17.913s 00:15:29.967 sys 0m2.045s 00:15:29.967 02:31:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.967 ************************************ 00:15:29.967 END TEST raid5f_state_function_test_sb 00:15:29.967 ************************************ 00:15:29.967 02:31:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.967 02:31:03 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:29.967 02:31:03 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:29.967 02:31:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.967 02:31:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:29.967 ************************************ 00:15:29.967 START TEST raid5f_superblock_test 00:15:29.967 ************************************ 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83854 00:15:29.967 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:29.968 02:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83854 00:15:29.968 02:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83854 ']' 00:15:29.968 02:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.968 02:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.968 02:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.968 02:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.968 02:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.968 [2024-11-28 02:31:03.293130] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:29.968 [2024-11-28 02:31:03.293257] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83854 ] 00:15:29.968 [2024-11-28 02:31:03.467551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.968 [2024-11-28 02:31:03.579278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.228 [2024-11-28 02:31:03.766892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.228 [2024-11-28 02:31:03.766953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.488 malloc1 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.488 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.488 [2024-11-28 02:31:04.155592] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:30.488 [2024-11-28 02:31:04.155690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.489 [2024-11-28 02:31:04.155729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:30.489 [2024-11-28 02:31:04.155757] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.489 [2024-11-28 02:31:04.157837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.489 [2024-11-28 02:31:04.157909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:30.489 pt1 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.489 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.749 malloc2 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.749 [2024-11-28 02:31:04.215018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:30.749 [2024-11-28 02:31:04.215125] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.749 [2024-11-28 02:31:04.215168] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:30.749 [2024-11-28 02:31:04.215197] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.749 [2024-11-28 02:31:04.217189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.749 [2024-11-28 02:31:04.217252] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:30.749 pt2 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.749 malloc3 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.749 [2024-11-28 02:31:04.305745] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:30.749 [2024-11-28 02:31:04.305793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.749 [2024-11-28 02:31:04.305811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:30.749 [2024-11-28 02:31:04.305820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.749 [2024-11-28 02:31:04.307796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.749 [2024-11-28 02:31:04.307831] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:30.749 pt3 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.749 02:31:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.749 malloc4 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.749 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.750 [2024-11-28 02:31:04.359506] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:30.750 [2024-11-28 02:31:04.359596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.750 [2024-11-28 02:31:04.359648] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:30.750 [2024-11-28 02:31:04.359676] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.750 [2024-11-28 02:31:04.361636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.750 [2024-11-28 02:31:04.361718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:30.750 pt4 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.750 [2024-11-28 02:31:04.371513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:30.750 [2024-11-28 02:31:04.373278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:30.750 [2024-11-28 02:31:04.373397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:30.750 [2024-11-28 02:31:04.373479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:30.750 [2024-11-28 02:31:04.373690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:30.750 [2024-11-28 02:31:04.373738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:30.750 [2024-11-28 02:31:04.373993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:30.750 [2024-11-28 02:31:04.380776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:30.750 [2024-11-28 02:31:04.380833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:30.750 [2024-11-28 02:31:04.381044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.750 
02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.750 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.010 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.010 "name": "raid_bdev1", 00:15:31.010 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:31.010 "strip_size_kb": 64, 00:15:31.010 "state": "online", 00:15:31.010 "raid_level": "raid5f", 00:15:31.010 "superblock": true, 00:15:31.010 "num_base_bdevs": 4, 00:15:31.010 "num_base_bdevs_discovered": 4, 00:15:31.010 "num_base_bdevs_operational": 4, 00:15:31.010 "base_bdevs_list": [ 00:15:31.010 { 00:15:31.010 "name": "pt1", 00:15:31.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.010 "is_configured": true, 00:15:31.010 "data_offset": 2048, 00:15:31.010 "data_size": 63488 00:15:31.010 }, 00:15:31.010 { 00:15:31.010 "name": "pt2", 00:15:31.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.010 "is_configured": true, 00:15:31.010 "data_offset": 2048, 00:15:31.010 
"data_size": 63488 00:15:31.010 }, 00:15:31.010 { 00:15:31.010 "name": "pt3", 00:15:31.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:31.010 "is_configured": true, 00:15:31.010 "data_offset": 2048, 00:15:31.010 "data_size": 63488 00:15:31.010 }, 00:15:31.010 { 00:15:31.010 "name": "pt4", 00:15:31.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:31.010 "is_configured": true, 00:15:31.010 "data_offset": 2048, 00:15:31.010 "data_size": 63488 00:15:31.010 } 00:15:31.010 ] 00:15:31.010 }' 00:15:31.010 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.010 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.270 [2024-11-28 02:31:04.840979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.270 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.270 "name": "raid_bdev1", 00:15:31.270 "aliases": [ 00:15:31.270 "272a793b-42eb-49b7-873a-44d5e7a8ca42" 00:15:31.270 ], 00:15:31.270 "product_name": "Raid Volume", 00:15:31.270 "block_size": 512, 00:15:31.270 "num_blocks": 190464, 00:15:31.270 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:31.270 "assigned_rate_limits": { 00:15:31.270 "rw_ios_per_sec": 0, 00:15:31.270 "rw_mbytes_per_sec": 0, 00:15:31.270 "r_mbytes_per_sec": 0, 00:15:31.270 "w_mbytes_per_sec": 0 00:15:31.270 }, 00:15:31.270 "claimed": false, 00:15:31.270 "zoned": false, 00:15:31.270 "supported_io_types": { 00:15:31.270 "read": true, 00:15:31.270 "write": true, 00:15:31.270 "unmap": false, 00:15:31.270 "flush": false, 00:15:31.270 "reset": true, 00:15:31.270 "nvme_admin": false, 00:15:31.270 "nvme_io": false, 00:15:31.270 "nvme_io_md": false, 00:15:31.270 "write_zeroes": true, 00:15:31.270 "zcopy": false, 00:15:31.270 "get_zone_info": false, 00:15:31.270 "zone_management": false, 00:15:31.270 "zone_append": false, 00:15:31.270 "compare": false, 00:15:31.270 "compare_and_write": false, 00:15:31.270 "abort": false, 00:15:31.270 "seek_hole": false, 00:15:31.270 "seek_data": false, 00:15:31.270 "copy": false, 00:15:31.270 "nvme_iov_md": false 00:15:31.270 }, 00:15:31.270 "driver_specific": { 00:15:31.270 "raid": { 00:15:31.270 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:31.270 "strip_size_kb": 64, 00:15:31.270 "state": "online", 00:15:31.270 "raid_level": "raid5f", 00:15:31.270 "superblock": true, 00:15:31.270 "num_base_bdevs": 4, 00:15:31.270 "num_base_bdevs_discovered": 4, 00:15:31.270 "num_base_bdevs_operational": 4, 00:15:31.270 "base_bdevs_list": [ 00:15:31.270 { 00:15:31.270 "name": "pt1", 00:15:31.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.270 "is_configured": true, 00:15:31.270 "data_offset": 2048, 
00:15:31.270 "data_size": 63488 00:15:31.270 }, 00:15:31.270 { 00:15:31.270 "name": "pt2", 00:15:31.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.270 "is_configured": true, 00:15:31.270 "data_offset": 2048, 00:15:31.270 "data_size": 63488 00:15:31.270 }, 00:15:31.270 { 00:15:31.270 "name": "pt3", 00:15:31.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:31.270 "is_configured": true, 00:15:31.270 "data_offset": 2048, 00:15:31.270 "data_size": 63488 00:15:31.270 }, 00:15:31.270 { 00:15:31.270 "name": "pt4", 00:15:31.271 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:31.271 "is_configured": true, 00:15:31.271 "data_offset": 2048, 00:15:31.271 "data_size": 63488 00:15:31.271 } 00:15:31.271 ] 00:15:31.271 } 00:15:31.271 } 00:15:31.271 }' 00:15:31.271 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.271 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:31.271 pt2 00:15:31.271 pt3 00:15:31.271 pt4' 00:15:31.271 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.531 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:31.531 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.531 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:31.531 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.531 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.531 02:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.531 02:31:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.531 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.531 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.531 02:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:31.531 [2024-11-28 02:31:05.156352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=272a793b-42eb-49b7-873a-44d5e7a8ca42 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
272a793b-42eb-49b7-873a-44d5e7a8ca42 ']' 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.531 [2024-11-28 02:31:05.196140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.531 [2024-11-28 02:31:05.196205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.531 [2024-11-28 02:31:05.196279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.531 [2024-11-28 02:31:05.196361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.531 [2024-11-28 02:31:05.196375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:31.531 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.792 
02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.792 02:31:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.792 [2024-11-28 02:31:05.359950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:31.792 [2024-11-28 02:31:05.361669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:31.792 [2024-11-28 02:31:05.361708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:31.792 [2024-11-28 02:31:05.361738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:31.792 [2024-11-28 02:31:05.361782] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:31.792 [2024-11-28 02:31:05.361825] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:31.792 [2024-11-28 02:31:05.361843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:31.792 [2024-11-28 02:31:05.361860] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:31.792 [2024-11-28 02:31:05.361872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.792 [2024-11-28 02:31:05.361881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:31.792 request: 00:15:31.792 { 00:15:31.792 "name": "raid_bdev1", 00:15:31.792 "raid_level": "raid5f", 00:15:31.792 "base_bdevs": [ 00:15:31.792 "malloc1", 00:15:31.792 "malloc2", 00:15:31.792 "malloc3", 00:15:31.792 "malloc4" 00:15:31.792 ], 00:15:31.792 "strip_size_kb": 64, 00:15:31.792 "superblock": false, 00:15:31.792 "method": "bdev_raid_create", 00:15:31.792 "req_id": 1 00:15:31.792 } 00:15:31.792 Got JSON-RPC error response 
00:15:31.792 response: 00:15:31.792 { 00:15:31.792 "code": -17, 00:15:31.792 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:31.792 } 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.792 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.792 [2024-11-28 02:31:05.423798] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:31.792 [2024-11-28 02:31:05.423843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:31.792 [2024-11-28 02:31:05.423858] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:31.792 [2024-11-28 02:31:05.423868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.792 [2024-11-28 02:31:05.425906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.792 [2024-11-28 02:31:05.425952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:31.793 [2024-11-28 02:31:05.426013] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:31.793 [2024-11-28 02:31:05.426061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:31.793 pt1 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.793 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.052 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.052 "name": "raid_bdev1", 00:15:32.052 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:32.052 "strip_size_kb": 64, 00:15:32.052 "state": "configuring", 00:15:32.052 "raid_level": "raid5f", 00:15:32.052 "superblock": true, 00:15:32.052 "num_base_bdevs": 4, 00:15:32.052 "num_base_bdevs_discovered": 1, 00:15:32.053 "num_base_bdevs_operational": 4, 00:15:32.053 "base_bdevs_list": [ 00:15:32.053 { 00:15:32.053 "name": "pt1", 00:15:32.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.053 "is_configured": true, 00:15:32.053 "data_offset": 2048, 00:15:32.053 "data_size": 63488 00:15:32.053 }, 00:15:32.053 { 00:15:32.053 "name": null, 00:15:32.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.053 "is_configured": false, 00:15:32.053 "data_offset": 2048, 00:15:32.053 "data_size": 63488 00:15:32.053 }, 00:15:32.053 { 00:15:32.053 "name": null, 00:15:32.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:32.053 "is_configured": false, 00:15:32.053 "data_offset": 2048, 00:15:32.053 "data_size": 63488 00:15:32.053 }, 00:15:32.053 { 00:15:32.053 "name": null, 00:15:32.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:32.053 "is_configured": false, 00:15:32.053 "data_offset": 2048, 00:15:32.053 "data_size": 63488 00:15:32.053 } 00:15:32.053 ] 00:15:32.053 }' 
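The `verify_raid_bdev_state` helper traced above pulls the raid bdev's JSON from `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares fields against the expected `configuring`/`raid5f` values. A minimal self-contained sketch of that field check follows; it uses `sed` in place of the `jq` filter the harness actually runs, and the JSON literal is a trimmed copy of the `raid_bdev_info` dump above:

```shell
#!/usr/bin/env bash
# Trimmed copy of the raid_bdev_info dump from the log above.
raid_bdev_info='{ "name": "raid_bdev1", "state": "configuring",
  "raid_level": "raid5f", "strip_size_kb": 64,
  "num_base_bdevs": 4, "num_base_bdevs_discovered": 1 }'

# Extract one scalar field from the JSON blob.
# The real helper does this with jq -r; sed is used here only so the
# sketch has no dependency beyond a POSIX shell.
get_field() {
    sed -n "s/.*\"$1\": *\"\{0,1\}\([^\",}]*\)\"\{0,1\}.*/\1/p" <<<"$2"
}

state=$(get_field state "$raid_bdev_info")
level=$(get_field raid_level "$raid_bdev_info")

# Mirrors the state/level comparison verify_raid_bdev_state performs.
[[ $state == configuring ]] && [[ $level == raid5f ]] \
    && echo "raid_bdev1 OK: state=$state level=$level"
```

With only `pt1` claimed, `num_base_bdevs_discovered` stays at 1 while `num_base_bdevs_operational` remains 4, which is why the bdev reports `configuring` rather than `online` at this point in the test.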
00:15:32.053 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.053 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.313 [2024-11-28 02:31:05.831130] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:32.313 [2024-11-28 02:31:05.831234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.313 [2024-11-28 02:31:05.831268] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:32.313 [2024-11-28 02:31:05.831297] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.313 [2024-11-28 02:31:05.831709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.313 [2024-11-28 02:31:05.831765] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:32.313 [2024-11-28 02:31:05.831866] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:32.313 [2024-11-28 02:31:05.831915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:32.313 pt2 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.313 [2024-11-28 02:31:05.843114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.313 "name": "raid_bdev1", 00:15:32.313 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:32.313 "strip_size_kb": 64, 00:15:32.313 "state": "configuring", 00:15:32.313 "raid_level": "raid5f", 00:15:32.313 "superblock": true, 00:15:32.313 "num_base_bdevs": 4, 00:15:32.313 "num_base_bdevs_discovered": 1, 00:15:32.313 "num_base_bdevs_operational": 4, 00:15:32.313 "base_bdevs_list": [ 00:15:32.313 { 00:15:32.313 "name": "pt1", 00:15:32.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.313 "is_configured": true, 00:15:32.313 "data_offset": 2048, 00:15:32.313 "data_size": 63488 00:15:32.313 }, 00:15:32.313 { 00:15:32.313 "name": null, 00:15:32.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.313 "is_configured": false, 00:15:32.313 "data_offset": 0, 00:15:32.313 "data_size": 63488 00:15:32.313 }, 00:15:32.313 { 00:15:32.313 "name": null, 00:15:32.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:32.313 "is_configured": false, 00:15:32.313 "data_offset": 2048, 00:15:32.313 "data_size": 63488 00:15:32.313 }, 00:15:32.313 { 00:15:32.313 "name": null, 00:15:32.313 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:32.313 "is_configured": false, 00:15:32.313 "data_offset": 2048, 00:15:32.313 "data_size": 63488 00:15:32.313 } 00:15:32.313 ] 00:15:32.313 }' 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.313 02:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
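The `(( i = 1 )); (( i < num_base_bdevs ))` loop entered above recreates one passthru bdev per remaining base device (`pt2` over `malloc2`, then `pt3`, then `pt4`), each with a fixed UUID so the raid superblock can match them. A dry-run sketch of that loop is below; it prints the RPC invocations instead of issuing them, and the `rpc.py` command form is an assumption inferred from the `rpc_cmd` wrapper in the trace, not copied from the harness:

```shell
#!/usr/bin/env bash
# Dry-run of the passthru re-creation loop: i = 1 .. num_base_bdevs-1
# maps to pt2..pt4 over malloc2..malloc4, matching the trace above.
num_base_bdevs=4
cmds=()
for (( i = 1; i < num_base_bdevs; i++ )); do
    n=$(( i + 1 ))
    # The harness wraps this in rpc_cmd; collected and printed here
    # instead of executed, since no SPDK target is running.
    cmds+=("rpc.py bdev_passthru_create -b malloc${n} -p pt${n} \
-u $(printf '00000000-0000-0000-0000-%012d' "$n")")
done
printf '%s\n' "${cmds[@]}"
```

Once all three re-created passthru bdevs are claimed, `num_base_bdevs_discovered` reaches 4 and the raid bdev transitions from `configuring` to `online`, which is exactly what the subsequent `raid_bdev_configure_cont` debug lines in the log record.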
00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.573 [2024-11-28 02:31:06.238417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:32.573 [2024-11-28 02:31:06.238476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.573 [2024-11-28 02:31:06.238494] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:32.573 [2024-11-28 02:31:06.238503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.573 [2024-11-28 02:31:06.238934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.573 [2024-11-28 02:31:06.238951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:32.573 [2024-11-28 02:31:06.239022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:32.573 [2024-11-28 02:31:06.239042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:32.573 pt2 00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.573 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.573 [2024-11-28 02:31:06.250391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:32.573 [2024-11-28 02:31:06.250434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.573 [2024-11-28 02:31:06.250457] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:32.573 [2024-11-28 02:31:06.250467] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.573 [2024-11-28 02:31:06.250797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.573 [2024-11-28 02:31:06.250812] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:32.573 [2024-11-28 02:31:06.250867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:32.573 [2024-11-28 02:31:06.250889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:32.834 pt3 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.834 [2024-11-28 02:31:06.262340] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:32.834 [2024-11-28 02:31:06.262377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.834 [2024-11-28 02:31:06.262408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:32.834 [2024-11-28 02:31:06.262415] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.834 [2024-11-28 02:31:06.262749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.834 [2024-11-28 02:31:06.262764] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:32.834 [2024-11-28 02:31:06.262815] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:32.834 [2024-11-28 02:31:06.262833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:32.834 [2024-11-28 02:31:06.262967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:32.834 [2024-11-28 02:31:06.262975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:32.834 [2024-11-28 02:31:06.263200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:32.834 [2024-11-28 02:31:06.269633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:32.834 pt4 00:15:32.834 [2024-11-28 02:31:06.269696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:32.834 [2024-11-28 02:31:06.269880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.834 "name": "raid_bdev1", 00:15:32.834 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:32.834 "strip_size_kb": 64, 00:15:32.834 "state": "online", 00:15:32.834 "raid_level": "raid5f", 00:15:32.834 "superblock": true, 00:15:32.834 "num_base_bdevs": 4, 00:15:32.834 "num_base_bdevs_discovered": 4, 00:15:32.834 "num_base_bdevs_operational": 4, 00:15:32.834 "base_bdevs_list": [ 00:15:32.834 { 00:15:32.834 "name": "pt1", 00:15:32.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.834 "is_configured": true, 00:15:32.834 
"data_offset": 2048, 00:15:32.834 "data_size": 63488 00:15:32.834 }, 00:15:32.834 { 00:15:32.834 "name": "pt2", 00:15:32.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.834 "is_configured": true, 00:15:32.834 "data_offset": 2048, 00:15:32.834 "data_size": 63488 00:15:32.834 }, 00:15:32.834 { 00:15:32.834 "name": "pt3", 00:15:32.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:32.834 "is_configured": true, 00:15:32.834 "data_offset": 2048, 00:15:32.834 "data_size": 63488 00:15:32.834 }, 00:15:32.834 { 00:15:32.834 "name": "pt4", 00:15:32.834 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:32.834 "is_configured": true, 00:15:32.834 "data_offset": 2048, 00:15:32.834 "data_size": 63488 00:15:32.834 } 00:15:32.834 ] 00:15:32.834 }' 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.834 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.094 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:33.094 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:33.094 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.094 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.094 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.094 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.094 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.094 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.094 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.094 02:31:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.094 [2024-11-28 02:31:06.705601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.095 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.095 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.095 "name": "raid_bdev1", 00:15:33.095 "aliases": [ 00:15:33.095 "272a793b-42eb-49b7-873a-44d5e7a8ca42" 00:15:33.095 ], 00:15:33.095 "product_name": "Raid Volume", 00:15:33.095 "block_size": 512, 00:15:33.095 "num_blocks": 190464, 00:15:33.095 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:33.095 "assigned_rate_limits": { 00:15:33.095 "rw_ios_per_sec": 0, 00:15:33.095 "rw_mbytes_per_sec": 0, 00:15:33.095 "r_mbytes_per_sec": 0, 00:15:33.095 "w_mbytes_per_sec": 0 00:15:33.095 }, 00:15:33.095 "claimed": false, 00:15:33.095 "zoned": false, 00:15:33.095 "supported_io_types": { 00:15:33.095 "read": true, 00:15:33.095 "write": true, 00:15:33.095 "unmap": false, 00:15:33.095 "flush": false, 00:15:33.095 "reset": true, 00:15:33.095 "nvme_admin": false, 00:15:33.095 "nvme_io": false, 00:15:33.095 "nvme_io_md": false, 00:15:33.095 "write_zeroes": true, 00:15:33.095 "zcopy": false, 00:15:33.095 "get_zone_info": false, 00:15:33.095 "zone_management": false, 00:15:33.095 "zone_append": false, 00:15:33.095 "compare": false, 00:15:33.095 "compare_and_write": false, 00:15:33.095 "abort": false, 00:15:33.095 "seek_hole": false, 00:15:33.095 "seek_data": false, 00:15:33.095 "copy": false, 00:15:33.095 "nvme_iov_md": false 00:15:33.095 }, 00:15:33.095 "driver_specific": { 00:15:33.095 "raid": { 00:15:33.095 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:33.095 "strip_size_kb": 64, 00:15:33.095 "state": "online", 00:15:33.095 "raid_level": "raid5f", 00:15:33.095 "superblock": true, 00:15:33.095 "num_base_bdevs": 4, 00:15:33.095 "num_base_bdevs_discovered": 4, 
00:15:33.095 "num_base_bdevs_operational": 4, 00:15:33.095 "base_bdevs_list": [ 00:15:33.095 { 00:15:33.095 "name": "pt1", 00:15:33.095 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.095 "is_configured": true, 00:15:33.095 "data_offset": 2048, 00:15:33.095 "data_size": 63488 00:15:33.095 }, 00:15:33.095 { 00:15:33.095 "name": "pt2", 00:15:33.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.095 "is_configured": true, 00:15:33.095 "data_offset": 2048, 00:15:33.095 "data_size": 63488 00:15:33.095 }, 00:15:33.095 { 00:15:33.095 "name": "pt3", 00:15:33.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.095 "is_configured": true, 00:15:33.095 "data_offset": 2048, 00:15:33.095 "data_size": 63488 00:15:33.095 }, 00:15:33.095 { 00:15:33.095 "name": "pt4", 00:15:33.095 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:33.095 "is_configured": true, 00:15:33.095 "data_offset": 2048, 00:15:33.095 "data_size": 63488 00:15:33.095 } 00:15:33.095 ] 00:15:33.095 } 00:15:33.095 } 00:15:33.095 }' 00:15:33.095 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:33.355 pt2 00:15:33.355 pt3 00:15:33.355 pt4' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.355 02:31:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.355 02:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.355 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.355 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.355 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.355 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.355 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.355 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:33.355 [2024-11-28 02:31:07.025053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.615 
02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 272a793b-42eb-49b7-873a-44d5e7a8ca42 '!=' 272a793b-42eb-49b7-873a-44d5e7a8ca42 ']' 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.615 [2024-11-28 02:31:07.072824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.615 "name": "raid_bdev1", 00:15:33.615 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:33.615 "strip_size_kb": 64, 00:15:33.615 "state": "online", 00:15:33.615 "raid_level": "raid5f", 00:15:33.615 "superblock": true, 00:15:33.615 "num_base_bdevs": 4, 00:15:33.615 "num_base_bdevs_discovered": 3, 00:15:33.615 "num_base_bdevs_operational": 3, 00:15:33.615 "base_bdevs_list": [ 00:15:33.615 { 00:15:33.615 "name": null, 00:15:33.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.615 "is_configured": false, 00:15:33.615 "data_offset": 0, 00:15:33.615 "data_size": 63488 00:15:33.615 }, 00:15:33.615 { 00:15:33.615 "name": "pt2", 00:15:33.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.615 "is_configured": true, 00:15:33.615 "data_offset": 2048, 00:15:33.615 "data_size": 63488 00:15:33.615 }, 00:15:33.615 { 00:15:33.615 "name": "pt3", 00:15:33.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.615 "is_configured": true, 00:15:33.615 "data_offset": 2048, 00:15:33.615 "data_size": 63488 00:15:33.615 }, 00:15:33.615 { 00:15:33.615 "name": "pt4", 00:15:33.615 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:33.615 "is_configured": true, 00:15:33.615 
"data_offset": 2048, 00:15:33.615 "data_size": 63488 00:15:33.615 } 00:15:33.615 ] 00:15:33.615 }' 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.615 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.875 [2024-11-28 02:31:07.508103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.875 [2024-11-28 02:31:07.508175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.875 [2024-11-28 02:31:07.508263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.875 [2024-11-28 02:31:07.508351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.875 [2024-11-28 02:31:07.508416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.875 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.136 [2024-11-28 02:31:07.587988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.136 [2024-11-28 02:31:07.588041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.136 [2024-11-28 02:31:07.588058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:34.136 [2024-11-28 02:31:07.588067] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.136 [2024-11-28 02:31:07.590179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.136 [2024-11-28 02:31:07.590205] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:34.136 [2024-11-28 02:31:07.590278] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:34.136 [2024-11-28 02:31:07.590327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.136 pt2 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.136 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.137 "name": "raid_bdev1", 00:15:34.137 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:34.137 "strip_size_kb": 64, 00:15:34.137 "state": "configuring", 00:15:34.137 "raid_level": "raid5f", 00:15:34.137 "superblock": true, 00:15:34.137 
"num_base_bdevs": 4, 00:15:34.137 "num_base_bdevs_discovered": 1, 00:15:34.137 "num_base_bdevs_operational": 3, 00:15:34.137 "base_bdevs_list": [ 00:15:34.137 { 00:15:34.137 "name": null, 00:15:34.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.137 "is_configured": false, 00:15:34.137 "data_offset": 2048, 00:15:34.137 "data_size": 63488 00:15:34.137 }, 00:15:34.137 { 00:15:34.137 "name": "pt2", 00:15:34.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.137 "is_configured": true, 00:15:34.137 "data_offset": 2048, 00:15:34.137 "data_size": 63488 00:15:34.137 }, 00:15:34.137 { 00:15:34.137 "name": null, 00:15:34.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.137 "is_configured": false, 00:15:34.137 "data_offset": 2048, 00:15:34.137 "data_size": 63488 00:15:34.137 }, 00:15:34.137 { 00:15:34.137 "name": null, 00:15:34.137 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:34.137 "is_configured": false, 00:15:34.137 "data_offset": 2048, 00:15:34.137 "data_size": 63488 00:15:34.137 } 00:15:34.137 ] 00:15:34.137 }' 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.137 02:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 [2024-11-28 02:31:08.023242] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:34.397 [2024-11-28 
02:31:08.023315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.397 [2024-11-28 02:31:08.023339] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:34.397 [2024-11-28 02:31:08.023348] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.397 [2024-11-28 02:31:08.023774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.397 [2024-11-28 02:31:08.023796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:34.397 [2024-11-28 02:31:08.023877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:34.397 [2024-11-28 02:31:08.023898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:34.397 pt3 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.397 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.397 "name": "raid_bdev1", 00:15:34.397 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:34.397 "strip_size_kb": 64, 00:15:34.397 "state": "configuring", 00:15:34.397 "raid_level": "raid5f", 00:15:34.397 "superblock": true, 00:15:34.397 "num_base_bdevs": 4, 00:15:34.397 "num_base_bdevs_discovered": 2, 00:15:34.397 "num_base_bdevs_operational": 3, 00:15:34.397 "base_bdevs_list": [ 00:15:34.397 { 00:15:34.397 "name": null, 00:15:34.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.397 "is_configured": false, 00:15:34.397 "data_offset": 2048, 00:15:34.397 "data_size": 63488 00:15:34.397 }, 00:15:34.397 { 00:15:34.397 "name": "pt2", 00:15:34.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.397 "is_configured": true, 00:15:34.397 "data_offset": 2048, 00:15:34.397 "data_size": 63488 00:15:34.397 }, 00:15:34.397 { 00:15:34.397 "name": "pt3", 00:15:34.398 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.398 "is_configured": true, 00:15:34.398 "data_offset": 2048, 00:15:34.398 "data_size": 63488 00:15:34.398 }, 00:15:34.398 { 00:15:34.398 "name": null, 00:15:34.398 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:34.398 "is_configured": false, 00:15:34.398 "data_offset": 2048, 
00:15:34.398 "data_size": 63488 00:15:34.398 } 00:15:34.398 ] 00:15:34.398 }' 00:15:34.398 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.398 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.969 [2024-11-28 02:31:08.450513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:34.969 [2024-11-28 02:31:08.450565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.969 [2024-11-28 02:31:08.450585] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:34.969 [2024-11-28 02:31:08.450594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.969 [2024-11-28 02:31:08.451046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.969 [2024-11-28 02:31:08.451067] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:34.969 [2024-11-28 02:31:08.451142] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:34.969 [2024-11-28 02:31:08.451168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:34.969 [2024-11-28 02:31:08.451304] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:34.969 [2024-11-28 02:31:08.451312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:34.969 [2024-11-28 02:31:08.451542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:34.969 [2024-11-28 02:31:08.458110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:34.969 [2024-11-28 02:31:08.458136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:34.969 [2024-11-28 02:31:08.458437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.969 pt4 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.969 
02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.969 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.969 "name": "raid_bdev1", 00:15:34.969 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:34.969 "strip_size_kb": 64, 00:15:34.969 "state": "online", 00:15:34.969 "raid_level": "raid5f", 00:15:34.969 "superblock": true, 00:15:34.969 "num_base_bdevs": 4, 00:15:34.969 "num_base_bdevs_discovered": 3, 00:15:34.969 "num_base_bdevs_operational": 3, 00:15:34.969 "base_bdevs_list": [ 00:15:34.969 { 00:15:34.969 "name": null, 00:15:34.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.969 "is_configured": false, 00:15:34.969 "data_offset": 2048, 00:15:34.969 "data_size": 63488 00:15:34.969 }, 00:15:34.969 { 00:15:34.969 "name": "pt2", 00:15:34.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.969 "is_configured": true, 00:15:34.969 "data_offset": 2048, 00:15:34.969 "data_size": 63488 00:15:34.969 }, 00:15:34.969 { 00:15:34.969 "name": "pt3", 00:15:34.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.969 "is_configured": true, 00:15:34.969 "data_offset": 2048, 00:15:34.969 "data_size": 63488 00:15:34.969 }, 00:15:34.969 { 00:15:34.969 "name": "pt4", 00:15:34.969 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:34.969 "is_configured": true, 00:15:34.969 "data_offset": 2048, 00:15:34.969 "data_size": 63488 00:15:34.969 } 00:15:34.969 ] 00:15:34.969 }' 00:15:34.969 02:31:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.970 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.229 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.229 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.229 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.229 [2024-11-28 02:31:08.858337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.229 [2024-11-28 02:31:08.858364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.229 [2024-11-28 02:31:08.858422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.229 [2024-11-28 02:31:08.858489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.229 [2024-11-28 02:31:08.858504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:35.229 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.229 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.229 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:35.229 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.230 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.230 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.490 [2024-11-28 02:31:08.930229] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.490 [2024-11-28 02:31:08.930279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.490 [2024-11-28 02:31:08.930300] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:35.490 [2024-11-28 02:31:08.930312] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.490 [2024-11-28 02:31:08.932430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.490 [2024-11-28 02:31:08.932466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.490 [2024-11-28 02:31:08.932535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:35.490 [2024-11-28 02:31:08.932579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.490 
[2024-11-28 02:31:08.932716] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:35.490 [2024-11-28 02:31:08.932736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.490 [2024-11-28 02:31:08.932749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:35.490 [2024-11-28 02:31:08.932820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.490 [2024-11-28 02:31:08.932933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:35.490 pt1 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.490 "name": "raid_bdev1", 00:15:35.490 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:35.490 "strip_size_kb": 64, 00:15:35.490 "state": "configuring", 00:15:35.490 "raid_level": "raid5f", 00:15:35.490 "superblock": true, 00:15:35.490 "num_base_bdevs": 4, 00:15:35.490 "num_base_bdevs_discovered": 2, 00:15:35.490 "num_base_bdevs_operational": 3, 00:15:35.490 "base_bdevs_list": [ 00:15:35.490 { 00:15:35.490 "name": null, 00:15:35.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.490 "is_configured": false, 00:15:35.490 "data_offset": 2048, 00:15:35.490 "data_size": 63488 00:15:35.490 }, 00:15:35.490 { 00:15:35.490 "name": "pt2", 00:15:35.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.490 "is_configured": true, 00:15:35.490 "data_offset": 2048, 00:15:35.490 "data_size": 63488 00:15:35.490 }, 00:15:35.490 { 00:15:35.490 "name": "pt3", 00:15:35.490 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.490 "is_configured": true, 00:15:35.490 "data_offset": 2048, 00:15:35.490 "data_size": 63488 00:15:35.490 }, 00:15:35.490 { 00:15:35.490 "name": null, 00:15:35.490 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:35.490 "is_configured": false, 00:15:35.490 "data_offset": 2048, 00:15:35.490 "data_size": 63488 00:15:35.490 } 00:15:35.490 ] 
00:15:35.490 }' 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.490 02:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.750 [2024-11-28 02:31:09.357537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:35.750 [2024-11-28 02:31:09.357591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.750 [2024-11-28 02:31:09.357613] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:35.750 [2024-11-28 02:31:09.357640] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.750 [2024-11-28 02:31:09.358082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.750 [2024-11-28 02:31:09.358109] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:35.750 [2024-11-28 02:31:09.358188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:35.750 [2024-11-28 02:31:09.358211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:35.750 [2024-11-28 02:31:09.358348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:35.750 [2024-11-28 02:31:09.358362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:35.750 [2024-11-28 02:31:09.358607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:35.750 [2024-11-28 02:31:09.365662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:35.750 [2024-11-28 02:31:09.365689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:35.750 [2024-11-28 02:31:09.365981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.750 pt4 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.750 02:31:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.750 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.750 "name": "raid_bdev1", 00:15:35.750 "uuid": "272a793b-42eb-49b7-873a-44d5e7a8ca42", 00:15:35.750 "strip_size_kb": 64, 00:15:35.750 "state": "online", 00:15:35.750 "raid_level": "raid5f", 00:15:35.750 "superblock": true, 00:15:35.750 "num_base_bdevs": 4, 00:15:35.750 "num_base_bdevs_discovered": 3, 00:15:35.750 "num_base_bdevs_operational": 3, 00:15:35.750 "base_bdevs_list": [ 00:15:35.750 { 00:15:35.750 "name": null, 00:15:35.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.750 "is_configured": false, 00:15:35.750 "data_offset": 2048, 00:15:35.750 "data_size": 63488 00:15:35.750 }, 00:15:35.750 { 00:15:35.750 "name": "pt2", 00:15:35.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.751 "is_configured": true, 00:15:35.751 "data_offset": 2048, 00:15:35.751 "data_size": 63488 00:15:35.751 }, 00:15:35.751 { 00:15:35.751 "name": "pt3", 00:15:35.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.751 "is_configured": true, 00:15:35.751 "data_offset": 2048, 00:15:35.751 "data_size": 63488 
00:15:35.751 }, 00:15:35.751 { 00:15:35.751 "name": "pt4", 00:15:35.751 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:35.751 "is_configured": true, 00:15:35.751 "data_offset": 2048, 00:15:35.751 "data_size": 63488 00:15:35.751 } 00:15:35.751 ] 00:15:35.751 }' 00:15:35.751 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.751 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.322 [2024-11-28 02:31:09.842060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 272a793b-42eb-49b7-873a-44d5e7a8ca42 '!=' 272a793b-42eb-49b7-873a-44d5e7a8ca42 ']' 00:15:36.322 02:31:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83854 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83854 ']' 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83854 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83854 00:15:36.322 killing process with pid 83854 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83854' 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83854 00:15:36.322 [2024-11-28 02:31:09.910405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.322 [2024-11-28 02:31:09.910476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.322 [2024-11-28 02:31:09.910544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.322 [2024-11-28 02:31:09.910558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:36.322 02:31:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83854 00:15:36.891 [2024-11-28 02:31:10.280377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:37.833 02:31:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:37.833 
00:15:37.833 real 0m8.118s 00:15:37.833 user 0m12.737s 00:15:37.833 sys 0m1.472s 00:15:37.833 02:31:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.833 02:31:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.833 ************************************ 00:15:37.833 END TEST raid5f_superblock_test 00:15:37.833 ************************************ 00:15:37.833 02:31:11 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:37.833 02:31:11 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:37.833 02:31:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:37.833 02:31:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.833 02:31:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:37.833 ************************************ 00:15:37.833 START TEST raid5f_rebuild_test 00:15:37.833 ************************************ 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:37.833 02:31:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84341 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84341 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84341 ']' 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.833 02:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.833 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:37.833 Zero copy mechanism will not be used. 00:15:37.833 [2024-11-28 02:31:11.489341] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:37.833 [2024-11-28 02:31:11.489478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84341 ] 00:15:38.093 [2024-11-28 02:31:11.662724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.093 [2024-11-28 02:31:11.767092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.353 [2024-11-28 02:31:11.962130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.353 [2024-11-28 02:31:11.962183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 BaseBdev1_malloc 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 [2024-11-28 02:31:12.346899] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:38.925 [2024-11-28 02:31:12.346978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.925 [2024-11-28 02:31:12.347000] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:38.925 [2024-11-28 02:31:12.347011] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.925 [2024-11-28 02:31:12.348992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.925 [2024-11-28 02:31:12.349028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:38.925 BaseBdev1 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 BaseBdev2_malloc 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 [2024-11-28 02:31:12.401867] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:38.925 [2024-11-28 02:31:12.401934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.925 [2024-11-28 02:31:12.401957] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:38.925 [2024-11-28 02:31:12.401968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.925 [2024-11-28 02:31:12.403998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.925 [2024-11-28 02:31:12.404040] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:38.925 BaseBdev2 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 BaseBdev3_malloc 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 [2024-11-28 02:31:12.466004] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:38.925 [2024-11-28 02:31:12.466070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.925 [2024-11-28 02:31:12.466092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:38.925 [2024-11-28 02:31:12.466102] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.925 
[2024-11-28 02:31:12.468166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.925 [2024-11-28 02:31:12.468206] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:38.925 BaseBdev3 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 BaseBdev4_malloc 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 [2024-11-28 02:31:12.521409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:38.925 [2024-11-28 02:31:12.521481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.925 [2024-11-28 02:31:12.521503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:38.925 [2024-11-28 02:31:12.521514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.925 [2024-11-28 02:31:12.523525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.925 [2024-11-28 02:31:12.523564] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:15:38.925 BaseBdev4 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 spare_malloc 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 spare_delay 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 [2024-11-28 02:31:12.587145] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:38.925 [2024-11-28 02:31:12.587205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.925 [2024-11-28 02:31:12.587237] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:38.925 [2024-11-28 02:31:12.587248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.925 [2024-11-28 02:31:12.589201] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.925 [2024-11-28 02:31:12.589253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:38.925 spare 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.925 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.925 [2024-11-28 02:31:12.599173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.925 [2024-11-28 02:31:12.600970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.925 [2024-11-28 02:31:12.601048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.925 [2024-11-28 02:31:12.601098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.925 [2024-11-28 02:31:12.601182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:38.925 [2024-11-28 02:31:12.601193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:38.926 [2024-11-28 02:31:12.601455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:39.186 [2024-11-28 02:31:12.608907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:39.186 [2024-11-28 02:31:12.608939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:39.186 [2024-11-28 02:31:12.609144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.186 02:31:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.186 "name": "raid_bdev1", 00:15:39.186 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:39.186 "strip_size_kb": 64, 00:15:39.186 "state": "online", 00:15:39.186 
"raid_level": "raid5f", 00:15:39.186 "superblock": false, 00:15:39.186 "num_base_bdevs": 4, 00:15:39.186 "num_base_bdevs_discovered": 4, 00:15:39.186 "num_base_bdevs_operational": 4, 00:15:39.186 "base_bdevs_list": [ 00:15:39.186 { 00:15:39.186 "name": "BaseBdev1", 00:15:39.186 "uuid": "2b1864dc-4804-546a-99b7-f1ade7ba5c87", 00:15:39.186 "is_configured": true, 00:15:39.186 "data_offset": 0, 00:15:39.186 "data_size": 65536 00:15:39.186 }, 00:15:39.186 { 00:15:39.186 "name": "BaseBdev2", 00:15:39.186 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:39.186 "is_configured": true, 00:15:39.186 "data_offset": 0, 00:15:39.186 "data_size": 65536 00:15:39.186 }, 00:15:39.186 { 00:15:39.186 "name": "BaseBdev3", 00:15:39.186 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:39.186 "is_configured": true, 00:15:39.186 "data_offset": 0, 00:15:39.186 "data_size": 65536 00:15:39.186 }, 00:15:39.186 { 00:15:39.186 "name": "BaseBdev4", 00:15:39.186 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:39.186 "is_configured": true, 00:15:39.186 "data_offset": 0, 00:15:39.186 "data_size": 65536 00:15:39.186 } 00:15:39.186 ] 00:15:39.186 }' 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.186 02:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.447 [2024-11-28 02:31:13.060424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:15:39.447 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:39.707 [2024-11-28 02:31:13.295932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:39.707 /dev/nbd0 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.707 1+0 records in 00:15:39.707 1+0 records out 00:15:39.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363928 s, 11.3 MB/s 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:39.707 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:40.295 512+0 records in 00:15:40.295 512+0 records out 00:15:40.295 100663296 bytes (101 MB, 96 MiB) copied, 0.453478 s, 222 MB/s 00:15:40.296 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:40.296 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.296 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:40.296 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.296 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:40.296 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.296 02:31:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.556 
[2024-11-28 02:31:14.029980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.556 [2024-11-28 02:31:14.043909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.556 "name": "raid_bdev1", 00:15:40.556 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:40.556 "strip_size_kb": 64, 00:15:40.556 "state": "online", 00:15:40.556 "raid_level": "raid5f", 00:15:40.556 "superblock": false, 00:15:40.556 "num_base_bdevs": 4, 00:15:40.556 "num_base_bdevs_discovered": 3, 00:15:40.556 "num_base_bdevs_operational": 3, 00:15:40.556 "base_bdevs_list": [ 00:15:40.556 { 00:15:40.556 "name": null, 00:15:40.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.556 "is_configured": false, 00:15:40.556 "data_offset": 0, 00:15:40.556 "data_size": 65536 00:15:40.556 }, 00:15:40.556 { 00:15:40.556 "name": "BaseBdev2", 00:15:40.556 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:40.556 "is_configured": true, 00:15:40.556 "data_offset": 0, 00:15:40.556 "data_size": 65536 00:15:40.556 }, 00:15:40.556 { 00:15:40.556 "name": "BaseBdev3", 00:15:40.556 "uuid": 
"e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:40.556 "is_configured": true, 00:15:40.556 "data_offset": 0, 00:15:40.556 "data_size": 65536 00:15:40.556 }, 00:15:40.556 { 00:15:40.556 "name": "BaseBdev4", 00:15:40.556 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:40.556 "is_configured": true, 00:15:40.556 "data_offset": 0, 00:15:40.556 "data_size": 65536 00:15:40.556 } 00:15:40.556 ] 00:15:40.556 }' 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.556 02:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.816 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.816 02:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.816 02:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.076 [2024-11-28 02:31:14.495109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.076 [2024-11-28 02:31:14.510819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:41.076 02:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.076 02:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:41.076 [2024-11-28 02:31:14.519471] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.032 02:31:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.032 "name": "raid_bdev1", 00:15:42.032 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:42.032 "strip_size_kb": 64, 00:15:42.032 "state": "online", 00:15:42.032 "raid_level": "raid5f", 00:15:42.032 "superblock": false, 00:15:42.032 "num_base_bdevs": 4, 00:15:42.032 "num_base_bdevs_discovered": 4, 00:15:42.032 "num_base_bdevs_operational": 4, 00:15:42.032 "process": { 00:15:42.032 "type": "rebuild", 00:15:42.032 "target": "spare", 00:15:42.032 "progress": { 00:15:42.032 "blocks": 19200, 00:15:42.032 "percent": 9 00:15:42.032 } 00:15:42.032 }, 00:15:42.032 "base_bdevs_list": [ 00:15:42.032 { 00:15:42.032 "name": "spare", 00:15:42.032 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:42.032 "is_configured": true, 00:15:42.032 "data_offset": 0, 00:15:42.032 "data_size": 65536 00:15:42.032 }, 00:15:42.032 { 00:15:42.032 "name": "BaseBdev2", 00:15:42.032 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:42.032 "is_configured": true, 00:15:42.032 "data_offset": 0, 00:15:42.032 "data_size": 65536 00:15:42.032 }, 00:15:42.032 { 00:15:42.032 "name": "BaseBdev3", 00:15:42.032 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:42.032 "is_configured": true, 00:15:42.032 "data_offset": 0, 00:15:42.032 "data_size": 65536 00:15:42.032 }, 
00:15:42.032 { 00:15:42.032 "name": "BaseBdev4", 00:15:42.032 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:42.032 "is_configured": true, 00:15:42.032 "data_offset": 0, 00:15:42.032 "data_size": 65536 00:15:42.032 } 00:15:42.032 ] 00:15:42.032 }' 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.032 02:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.032 [2024-11-28 02:31:15.642169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.292 [2024-11-28 02:31:15.725125] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.292 [2024-11-28 02:31:15.725205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.292 [2024-11-28 02:31:15.725221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.292 [2024-11-28 02:31:15.725234] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.292 "name": "raid_bdev1", 00:15:42.292 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:42.292 "strip_size_kb": 64, 00:15:42.292 "state": "online", 00:15:42.292 "raid_level": "raid5f", 00:15:42.292 "superblock": false, 00:15:42.292 "num_base_bdevs": 4, 00:15:42.292 "num_base_bdevs_discovered": 3, 00:15:42.292 "num_base_bdevs_operational": 3, 00:15:42.292 "base_bdevs_list": [ 00:15:42.292 { 00:15:42.292 "name": null, 00:15:42.292 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:42.292 "is_configured": false, 00:15:42.292 "data_offset": 0, 00:15:42.292 "data_size": 65536 00:15:42.292 }, 00:15:42.292 { 00:15:42.292 "name": "BaseBdev2", 00:15:42.292 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:42.292 "is_configured": true, 00:15:42.292 "data_offset": 0, 00:15:42.292 "data_size": 65536 00:15:42.292 }, 00:15:42.292 { 00:15:42.292 "name": "BaseBdev3", 00:15:42.292 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:42.292 "is_configured": true, 00:15:42.292 "data_offset": 0, 00:15:42.292 "data_size": 65536 00:15:42.292 }, 00:15:42.292 { 00:15:42.292 "name": "BaseBdev4", 00:15:42.292 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:42.292 "is_configured": true, 00:15:42.292 "data_offset": 0, 00:15:42.292 "data_size": 65536 00:15:42.292 } 00:15:42.292 ] 00:15:42.292 }' 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.292 02:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.552 "name": "raid_bdev1", 00:15:42.552 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:42.552 "strip_size_kb": 64, 00:15:42.552 "state": "online", 00:15:42.552 "raid_level": "raid5f", 00:15:42.552 "superblock": false, 00:15:42.552 "num_base_bdevs": 4, 00:15:42.552 "num_base_bdevs_discovered": 3, 00:15:42.552 "num_base_bdevs_operational": 3, 00:15:42.552 "base_bdevs_list": [ 00:15:42.552 { 00:15:42.552 "name": null, 00:15:42.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.552 "is_configured": false, 00:15:42.552 "data_offset": 0, 00:15:42.552 "data_size": 65536 00:15:42.552 }, 00:15:42.552 { 00:15:42.552 "name": "BaseBdev2", 00:15:42.552 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:42.552 "is_configured": true, 00:15:42.552 "data_offset": 0, 00:15:42.552 "data_size": 65536 00:15:42.552 }, 00:15:42.552 { 00:15:42.552 "name": "BaseBdev3", 00:15:42.552 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:42.552 "is_configured": true, 00:15:42.552 "data_offset": 0, 00:15:42.552 "data_size": 65536 00:15:42.552 }, 00:15:42.552 { 00:15:42.552 "name": "BaseBdev4", 00:15:42.552 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:42.552 "is_configured": true, 00:15:42.552 "data_offset": 0, 00:15:42.552 "data_size": 65536 00:15:42.552 } 00:15:42.552 ] 00:15:42.552 }' 00:15:42.552 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.812 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.812 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.812 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:15:42.812 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.812 02:31:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.812 02:31:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.812 [2024-11-28 02:31:16.293318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.812 [2024-11-28 02:31:16.307595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:15:42.812 02:31:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.812 02:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:42.812 [2024-11-28 02:31:16.316676] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.752 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.752 "name": "raid_bdev1", 00:15:43.752 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:43.752 "strip_size_kb": 64, 00:15:43.752 "state": "online", 00:15:43.752 "raid_level": "raid5f", 00:15:43.752 "superblock": false, 00:15:43.752 "num_base_bdevs": 4, 00:15:43.752 "num_base_bdevs_discovered": 4, 00:15:43.752 "num_base_bdevs_operational": 4, 00:15:43.753 "process": { 00:15:43.753 "type": "rebuild", 00:15:43.753 "target": "spare", 00:15:43.753 "progress": { 00:15:43.753 "blocks": 17280, 00:15:43.753 "percent": 8 00:15:43.753 } 00:15:43.753 }, 00:15:43.753 "base_bdevs_list": [ 00:15:43.753 { 00:15:43.753 "name": "spare", 00:15:43.753 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:43.753 "is_configured": true, 00:15:43.753 "data_offset": 0, 00:15:43.753 "data_size": 65536 00:15:43.753 }, 00:15:43.753 { 00:15:43.753 "name": "BaseBdev2", 00:15:43.753 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:43.753 "is_configured": true, 00:15:43.753 "data_offset": 0, 00:15:43.753 "data_size": 65536 00:15:43.753 }, 00:15:43.753 { 00:15:43.753 "name": "BaseBdev3", 00:15:43.753 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:43.753 "is_configured": true, 00:15:43.753 "data_offset": 0, 00:15:43.753 "data_size": 65536 00:15:43.753 }, 00:15:43.753 { 00:15:43.753 "name": "BaseBdev4", 00:15:43.753 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:43.753 "is_configured": true, 00:15:43.753 "data_offset": 0, 00:15:43.753 "data_size": 65536 00:15:43.753 } 00:15:43.753 ] 00:15:43.753 }' 00:15:43.753 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.753 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.753 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:44.013 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.013 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:44.013 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=607 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.014 "name": "raid_bdev1", 00:15:44.014 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:44.014 "strip_size_kb": 64, 
00:15:44.014 "state": "online", 00:15:44.014 "raid_level": "raid5f", 00:15:44.014 "superblock": false, 00:15:44.014 "num_base_bdevs": 4, 00:15:44.014 "num_base_bdevs_discovered": 4, 00:15:44.014 "num_base_bdevs_operational": 4, 00:15:44.014 "process": { 00:15:44.014 "type": "rebuild", 00:15:44.014 "target": "spare", 00:15:44.014 "progress": { 00:15:44.014 "blocks": 21120, 00:15:44.014 "percent": 10 00:15:44.014 } 00:15:44.014 }, 00:15:44.014 "base_bdevs_list": [ 00:15:44.014 { 00:15:44.014 "name": "spare", 00:15:44.014 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:44.014 "is_configured": true, 00:15:44.014 "data_offset": 0, 00:15:44.014 "data_size": 65536 00:15:44.014 }, 00:15:44.014 { 00:15:44.014 "name": "BaseBdev2", 00:15:44.014 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:44.014 "is_configured": true, 00:15:44.014 "data_offset": 0, 00:15:44.014 "data_size": 65536 00:15:44.014 }, 00:15:44.014 { 00:15:44.014 "name": "BaseBdev3", 00:15:44.014 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:44.014 "is_configured": true, 00:15:44.014 "data_offset": 0, 00:15:44.014 "data_size": 65536 00:15:44.014 }, 00:15:44.014 { 00:15:44.014 "name": "BaseBdev4", 00:15:44.014 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:44.014 "is_configured": true, 00:15:44.014 "data_offset": 0, 00:15:44.014 "data_size": 65536 00:15:44.014 } 00:15:44.014 ] 00:15:44.014 }' 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.014 02:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.953 02:31:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.212 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.212 "name": "raid_bdev1", 00:15:45.212 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:45.212 "strip_size_kb": 64, 00:15:45.212 "state": "online", 00:15:45.212 "raid_level": "raid5f", 00:15:45.212 "superblock": false, 00:15:45.212 "num_base_bdevs": 4, 00:15:45.212 "num_base_bdevs_discovered": 4, 00:15:45.212 "num_base_bdevs_operational": 4, 00:15:45.212 "process": { 00:15:45.212 "type": "rebuild", 00:15:45.212 "target": "spare", 00:15:45.212 "progress": { 00:15:45.212 "blocks": 42240, 00:15:45.212 "percent": 21 00:15:45.212 } 00:15:45.212 }, 00:15:45.212 "base_bdevs_list": [ 00:15:45.212 { 00:15:45.212 "name": "spare", 00:15:45.212 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:45.212 "is_configured": true, 
00:15:45.212 "data_offset": 0, 00:15:45.212 "data_size": 65536 00:15:45.212 }, 00:15:45.212 { 00:15:45.212 "name": "BaseBdev2", 00:15:45.212 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:45.212 "is_configured": true, 00:15:45.212 "data_offset": 0, 00:15:45.212 "data_size": 65536 00:15:45.212 }, 00:15:45.212 { 00:15:45.212 "name": "BaseBdev3", 00:15:45.212 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:45.212 "is_configured": true, 00:15:45.212 "data_offset": 0, 00:15:45.212 "data_size": 65536 00:15:45.212 }, 00:15:45.212 { 00:15:45.212 "name": "BaseBdev4", 00:15:45.212 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:45.212 "is_configured": true, 00:15:45.212 "data_offset": 0, 00:15:45.212 "data_size": 65536 00:15:45.212 } 00:15:45.212 ] 00:15:45.212 }' 00:15:45.212 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.212 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.212 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.212 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.212 02:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.150 "name": "raid_bdev1", 00:15:46.150 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:46.150 "strip_size_kb": 64, 00:15:46.150 "state": "online", 00:15:46.150 "raid_level": "raid5f", 00:15:46.150 "superblock": false, 00:15:46.150 "num_base_bdevs": 4, 00:15:46.150 "num_base_bdevs_discovered": 4, 00:15:46.150 "num_base_bdevs_operational": 4, 00:15:46.150 "process": { 00:15:46.150 "type": "rebuild", 00:15:46.150 "target": "spare", 00:15:46.150 "progress": { 00:15:46.150 "blocks": 65280, 00:15:46.150 "percent": 33 00:15:46.150 } 00:15:46.150 }, 00:15:46.150 "base_bdevs_list": [ 00:15:46.150 { 00:15:46.150 "name": "spare", 00:15:46.150 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:46.150 "is_configured": true, 00:15:46.150 "data_offset": 0, 00:15:46.150 "data_size": 65536 00:15:46.150 }, 00:15:46.150 { 00:15:46.150 "name": "BaseBdev2", 00:15:46.150 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:46.150 "is_configured": true, 00:15:46.150 "data_offset": 0, 00:15:46.150 "data_size": 65536 00:15:46.150 }, 00:15:46.150 { 00:15:46.150 "name": "BaseBdev3", 00:15:46.150 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:46.150 "is_configured": true, 00:15:46.150 "data_offset": 0, 00:15:46.150 "data_size": 65536 00:15:46.150 }, 00:15:46.150 { 00:15:46.150 "name": "BaseBdev4", 00:15:46.150 "uuid": 
"ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:46.150 "is_configured": true, 00:15:46.150 "data_offset": 0, 00:15:46.150 "data_size": 65536 00:15:46.150 } 00:15:46.150 ] 00:15:46.150 }' 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.150 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.151 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.410 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.410 02:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.350 02:31:20 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.350 "name": "raid_bdev1", 00:15:47.350 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:47.350 "strip_size_kb": 64, 00:15:47.350 "state": "online", 00:15:47.350 "raid_level": "raid5f", 00:15:47.350 "superblock": false, 00:15:47.350 "num_base_bdevs": 4, 00:15:47.350 "num_base_bdevs_discovered": 4, 00:15:47.350 "num_base_bdevs_operational": 4, 00:15:47.350 "process": { 00:15:47.350 "type": "rebuild", 00:15:47.350 "target": "spare", 00:15:47.351 "progress": { 00:15:47.351 "blocks": 86400, 00:15:47.351 "percent": 43 00:15:47.351 } 00:15:47.351 }, 00:15:47.351 "base_bdevs_list": [ 00:15:47.351 { 00:15:47.351 "name": "spare", 00:15:47.351 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:47.351 "is_configured": true, 00:15:47.351 "data_offset": 0, 00:15:47.351 "data_size": 65536 00:15:47.351 }, 00:15:47.351 { 00:15:47.351 "name": "BaseBdev2", 00:15:47.351 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:47.351 "is_configured": true, 00:15:47.351 "data_offset": 0, 00:15:47.351 "data_size": 65536 00:15:47.351 }, 00:15:47.351 { 00:15:47.351 "name": "BaseBdev3", 00:15:47.351 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:47.351 "is_configured": true, 00:15:47.351 "data_offset": 0, 00:15:47.351 "data_size": 65536 00:15:47.351 }, 00:15:47.351 { 00:15:47.351 "name": "BaseBdev4", 00:15:47.351 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:47.351 "is_configured": true, 00:15:47.351 "data_offset": 0, 00:15:47.351 "data_size": 65536 00:15:47.351 } 00:15:47.351 ] 00:15:47.351 }' 00:15:47.351 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.351 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.351 02:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.351 02:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:15:47.351 02:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.731 "name": "raid_bdev1", 00:15:48.731 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:48.731 "strip_size_kb": 64, 00:15:48.731 "state": "online", 00:15:48.731 "raid_level": "raid5f", 00:15:48.731 "superblock": false, 00:15:48.731 "num_base_bdevs": 4, 00:15:48.731 "num_base_bdevs_discovered": 4, 00:15:48.731 "num_base_bdevs_operational": 4, 00:15:48.731 "process": { 00:15:48.731 "type": "rebuild", 00:15:48.731 "target": "spare", 00:15:48.731 "progress": { 00:15:48.731 "blocks": 107520, 00:15:48.731 "percent": 54 00:15:48.731 } 00:15:48.731 }, 00:15:48.731 
"base_bdevs_list": [ 00:15:48.731 { 00:15:48.731 "name": "spare", 00:15:48.731 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:48.731 "is_configured": true, 00:15:48.731 "data_offset": 0, 00:15:48.731 "data_size": 65536 00:15:48.731 }, 00:15:48.731 { 00:15:48.731 "name": "BaseBdev2", 00:15:48.731 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:48.731 "is_configured": true, 00:15:48.731 "data_offset": 0, 00:15:48.731 "data_size": 65536 00:15:48.731 }, 00:15:48.731 { 00:15:48.731 "name": "BaseBdev3", 00:15:48.731 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:48.731 "is_configured": true, 00:15:48.731 "data_offset": 0, 00:15:48.731 "data_size": 65536 00:15:48.731 }, 00:15:48.731 { 00:15:48.731 "name": "BaseBdev4", 00:15:48.731 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:48.731 "is_configured": true, 00:15:48.731 "data_offset": 0, 00:15:48.731 "data_size": 65536 00:15:48.731 } 00:15:48.731 ] 00:15:48.731 }' 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.731 02:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.671 02:31:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.671 "name": "raid_bdev1", 00:15:49.671 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:49.671 "strip_size_kb": 64, 00:15:49.671 "state": "online", 00:15:49.671 "raid_level": "raid5f", 00:15:49.671 "superblock": false, 00:15:49.671 "num_base_bdevs": 4, 00:15:49.671 "num_base_bdevs_discovered": 4, 00:15:49.671 "num_base_bdevs_operational": 4, 00:15:49.671 "process": { 00:15:49.671 "type": "rebuild", 00:15:49.671 "target": "spare", 00:15:49.671 "progress": { 00:15:49.671 "blocks": 130560, 00:15:49.671 "percent": 66 00:15:49.671 } 00:15:49.671 }, 00:15:49.671 "base_bdevs_list": [ 00:15:49.671 { 00:15:49.671 "name": "spare", 00:15:49.671 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:49.671 "is_configured": true, 00:15:49.671 "data_offset": 0, 00:15:49.671 "data_size": 65536 00:15:49.671 }, 00:15:49.671 { 00:15:49.671 "name": "BaseBdev2", 00:15:49.671 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:49.671 "is_configured": true, 00:15:49.671 "data_offset": 0, 00:15:49.671 "data_size": 65536 00:15:49.671 }, 00:15:49.671 { 00:15:49.671 "name": "BaseBdev3", 00:15:49.671 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:49.671 
"is_configured": true, 00:15:49.671 "data_offset": 0, 00:15:49.671 "data_size": 65536 00:15:49.671 }, 00:15:49.671 { 00:15:49.671 "name": "BaseBdev4", 00:15:49.671 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:49.671 "is_configured": true, 00:15:49.671 "data_offset": 0, 00:15:49.671 "data_size": 65536 00:15:49.671 } 00:15:49.671 ] 00:15:49.671 }' 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.671 02:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.053 "name": "raid_bdev1", 00:15:51.053 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:51.053 "strip_size_kb": 64, 00:15:51.053 "state": "online", 00:15:51.053 "raid_level": "raid5f", 00:15:51.053 "superblock": false, 00:15:51.053 "num_base_bdevs": 4, 00:15:51.053 "num_base_bdevs_discovered": 4, 00:15:51.053 "num_base_bdevs_operational": 4, 00:15:51.053 "process": { 00:15:51.053 "type": "rebuild", 00:15:51.053 "target": "spare", 00:15:51.053 "progress": { 00:15:51.053 "blocks": 151680, 00:15:51.053 "percent": 77 00:15:51.053 } 00:15:51.053 }, 00:15:51.053 "base_bdevs_list": [ 00:15:51.053 { 00:15:51.053 "name": "spare", 00:15:51.053 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:51.053 "is_configured": true, 00:15:51.053 "data_offset": 0, 00:15:51.053 "data_size": 65536 00:15:51.053 }, 00:15:51.053 { 00:15:51.053 "name": "BaseBdev2", 00:15:51.053 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:51.053 "is_configured": true, 00:15:51.053 "data_offset": 0, 00:15:51.053 "data_size": 65536 00:15:51.053 }, 00:15:51.053 { 00:15:51.053 "name": "BaseBdev3", 00:15:51.053 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:51.053 "is_configured": true, 00:15:51.053 "data_offset": 0, 00:15:51.053 "data_size": 65536 00:15:51.053 }, 00:15:51.053 { 00:15:51.053 "name": "BaseBdev4", 00:15:51.053 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:51.053 "is_configured": true, 00:15:51.053 "data_offset": 0, 00:15:51.053 "data_size": 65536 00:15:51.053 } 00:15:51.053 ] 00:15:51.053 }' 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.053 02:31:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.053 02:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.996 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.996 "name": "raid_bdev1", 00:15:51.996 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:51.996 "strip_size_kb": 64, 00:15:51.996 "state": "online", 00:15:51.996 "raid_level": "raid5f", 00:15:51.996 "superblock": false, 00:15:51.996 "num_base_bdevs": 4, 00:15:51.996 "num_base_bdevs_discovered": 4, 00:15:51.996 "num_base_bdevs_operational": 4, 00:15:51.996 "process": { 00:15:51.996 
"type": "rebuild", 00:15:51.996 "target": "spare", 00:15:51.996 "progress": { 00:15:51.996 "blocks": 174720, 00:15:51.996 "percent": 88 00:15:51.996 } 00:15:51.996 }, 00:15:51.996 "base_bdevs_list": [ 00:15:51.996 { 00:15:51.996 "name": "spare", 00:15:51.996 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:51.996 "is_configured": true, 00:15:51.996 "data_offset": 0, 00:15:51.996 "data_size": 65536 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "name": "BaseBdev2", 00:15:51.996 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:51.996 "is_configured": true, 00:15:51.996 "data_offset": 0, 00:15:51.996 "data_size": 65536 00:15:51.997 }, 00:15:51.997 { 00:15:51.997 "name": "BaseBdev3", 00:15:51.997 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:51.997 "is_configured": true, 00:15:51.997 "data_offset": 0, 00:15:51.997 "data_size": 65536 00:15:51.997 }, 00:15:51.997 { 00:15:51.997 "name": "BaseBdev4", 00:15:51.997 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:51.997 "is_configured": true, 00:15:51.997 "data_offset": 0, 00:15:51.997 "data_size": 65536 00:15:51.997 } 00:15:51.997 ] 00:15:51.997 }' 00:15:51.997 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.997 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.997 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.997 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.997 02:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:52.935 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.935 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.935 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:52.935 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.935 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.195 [2024-11-28 02:31:26.660838] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:53.195 [2024-11-28 02:31:26.660906] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:53.195 [2024-11-28 02:31:26.660970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.195 "name": "raid_bdev1", 00:15:53.195 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:53.195 "strip_size_kb": 64, 00:15:53.195 "state": "online", 00:15:53.195 "raid_level": "raid5f", 00:15:53.195 "superblock": false, 00:15:53.195 "num_base_bdevs": 4, 00:15:53.195 "num_base_bdevs_discovered": 4, 00:15:53.195 "num_base_bdevs_operational": 4, 00:15:53.195 "process": { 00:15:53.195 "type": "rebuild", 00:15:53.195 "target": "spare", 00:15:53.195 "progress": { 00:15:53.195 "blocks": 195840, 00:15:53.195 "percent": 99 00:15:53.195 } 00:15:53.195 }, 00:15:53.195 "base_bdevs_list": [ 00:15:53.195 { 00:15:53.195 "name": 
"spare", 00:15:53.195 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:53.195 "is_configured": true, 00:15:53.195 "data_offset": 0, 00:15:53.195 "data_size": 65536 00:15:53.195 }, 00:15:53.195 { 00:15:53.195 "name": "BaseBdev2", 00:15:53.195 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:53.195 "is_configured": true, 00:15:53.195 "data_offset": 0, 00:15:53.195 "data_size": 65536 00:15:53.195 }, 00:15:53.195 { 00:15:53.195 "name": "BaseBdev3", 00:15:53.195 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:53.195 "is_configured": true, 00:15:53.195 "data_offset": 0, 00:15:53.195 "data_size": 65536 00:15:53.195 }, 00:15:53.195 { 00:15:53.195 "name": "BaseBdev4", 00:15:53.195 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:53.195 "is_configured": true, 00:15:53.195 "data_offset": 0, 00:15:53.195 "data_size": 65536 00:15:53.195 } 00:15:53.195 ] 00:15:53.195 }' 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.195 02:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.153 "name": "raid_bdev1", 00:15:54.153 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:54.153 "strip_size_kb": 64, 00:15:54.153 "state": "online", 00:15:54.153 "raid_level": "raid5f", 00:15:54.153 "superblock": false, 00:15:54.153 "num_base_bdevs": 4, 00:15:54.153 "num_base_bdevs_discovered": 4, 00:15:54.153 "num_base_bdevs_operational": 4, 00:15:54.153 "base_bdevs_list": [ 00:15:54.153 { 00:15:54.153 "name": "spare", 00:15:54.153 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:54.153 "is_configured": true, 00:15:54.153 "data_offset": 0, 00:15:54.153 "data_size": 65536 00:15:54.153 }, 00:15:54.153 { 00:15:54.153 "name": "BaseBdev2", 00:15:54.153 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:54.153 "is_configured": true, 00:15:54.153 "data_offset": 0, 00:15:54.153 "data_size": 65536 00:15:54.153 }, 00:15:54.153 { 00:15:54.153 "name": "BaseBdev3", 00:15:54.153 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:54.153 "is_configured": true, 00:15:54.153 "data_offset": 0, 00:15:54.153 "data_size": 65536 00:15:54.153 }, 00:15:54.153 { 00:15:54.153 "name": "BaseBdev4", 00:15:54.153 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:54.153 "is_configured": true, 00:15:54.153 "data_offset": 0, 00:15:54.153 
"data_size": 65536 00:15:54.153 } 00:15:54.153 ] 00:15:54.153 }' 00:15:54.153 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.414 "name": "raid_bdev1", 00:15:54.414 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:54.414 "strip_size_kb": 64, 00:15:54.414 "state": "online", 00:15:54.414 "raid_level": "raid5f", 
00:15:54.414 "superblock": false, 00:15:54.414 "num_base_bdevs": 4, 00:15:54.414 "num_base_bdevs_discovered": 4, 00:15:54.414 "num_base_bdevs_operational": 4, 00:15:54.414 "base_bdevs_list": [ 00:15:54.414 { 00:15:54.414 "name": "spare", 00:15:54.414 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:54.414 "is_configured": true, 00:15:54.414 "data_offset": 0, 00:15:54.414 "data_size": 65536 00:15:54.414 }, 00:15:54.414 { 00:15:54.414 "name": "BaseBdev2", 00:15:54.414 "uuid": "ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:54.414 "is_configured": true, 00:15:54.414 "data_offset": 0, 00:15:54.414 "data_size": 65536 00:15:54.414 }, 00:15:54.414 { 00:15:54.414 "name": "BaseBdev3", 00:15:54.414 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:54.414 "is_configured": true, 00:15:54.414 "data_offset": 0, 00:15:54.414 "data_size": 65536 00:15:54.414 }, 00:15:54.414 { 00:15:54.414 "name": "BaseBdev4", 00:15:54.414 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:54.414 "is_configured": true, 00:15:54.414 "data_offset": 0, 00:15:54.414 "data_size": 65536 00:15:54.414 } 00:15:54.414 ] 00:15:54.414 }' 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.414 02:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.414 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.414 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.414 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.414 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.414 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.414 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.414 "name": "raid_bdev1", 00:15:54.414 "uuid": "2154ce18-735d-4e1e-8b73-41e6c9947de3", 00:15:54.414 "strip_size_kb": 64, 00:15:54.414 "state": "online", 00:15:54.414 "raid_level": "raid5f", 00:15:54.414 "superblock": false, 00:15:54.414 "num_base_bdevs": 4, 00:15:54.414 "num_base_bdevs_discovered": 4, 00:15:54.414 "num_base_bdevs_operational": 4, 00:15:54.414 "base_bdevs_list": [ 00:15:54.414 { 00:15:54.414 "name": "spare", 00:15:54.414 "uuid": "f9ead95b-c1b2-5484-8864-8ab9f6201b94", 00:15:54.414 "is_configured": true, 00:15:54.414 "data_offset": 0, 00:15:54.414 "data_size": 65536 00:15:54.414 }, 00:15:54.414 { 00:15:54.414 "name": "BaseBdev2", 00:15:54.414 "uuid": 
"ed6a5117-ec0c-58c4-8f44-38f7bd05a2a2", 00:15:54.414 "is_configured": true, 00:15:54.414 "data_offset": 0, 00:15:54.414 "data_size": 65536 00:15:54.414 }, 00:15:54.414 { 00:15:54.414 "name": "BaseBdev3", 00:15:54.414 "uuid": "e4f5b823-4da6-566d-ae72-d6610edce840", 00:15:54.414 "is_configured": true, 00:15:54.414 "data_offset": 0, 00:15:54.415 "data_size": 65536 00:15:54.415 }, 00:15:54.415 { 00:15:54.415 "name": "BaseBdev4", 00:15:54.415 "uuid": "ab6c819f-69f9-50df-806a-c6a7641689fb", 00:15:54.415 "is_configured": true, 00:15:54.415 "data_offset": 0, 00:15:54.415 "data_size": 65536 00:15:54.415 } 00:15:54.415 ] 00:15:54.415 }' 00:15:54.415 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.415 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.675 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:54.675 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.675 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.675 [2024-11-28 02:31:28.348375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.675 [2024-11-28 02:31:28.348407] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.675 [2024-11-28 02:31:28.348494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.675 [2024-11-28 02:31:28.348605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.675 [2024-11-28 02:31:28.348620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:54.675 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:54.935 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:54.935 /dev/nbd0 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.195 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.195 1+0 records in 00:15:55.196 1+0 records out 00:15:55.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154321 s, 26.5 MB/s 00:15:55.196 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.196 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:55.196 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.196 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.196 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:55.196 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:15:55.196 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:55.196 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:55.196 /dev/nbd1 00:15:55.196 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.456 1+0 records in 00:15:55.456 1+0 records out 00:15:55.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371988 s, 11.0 MB/s 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:55.456 02:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:55.456 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:55.456 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.456 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:55.456 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:55.456 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:55.456 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.456 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:55.716 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:55.716 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:55.716 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:55.716 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.716 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.716 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:15:55.716 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:55.716 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.716 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.716 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84341 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84341 ']' 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84341 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 84341 00:15:55.976 killing process with pid 84341 00:15:55.976 Received shutdown signal, test time was about 60.000000 seconds 00:15:55.976 00:15:55.976 Latency(us) 00:15:55.976 [2024-11-28T02:31:29.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.976 [2024-11-28T02:31:29.655Z] =================================================================================================================== 00:15:55.976 [2024-11-28T02:31:29.655Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84341' 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84341 00:15:55.976 [2024-11-28 02:31:29.521389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.976 02:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84341 00:15:56.546 [2024-11-28 02:31:29.985219] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:57.485 00:15:57.485 real 0m19.635s 00:15:57.485 user 0m23.375s 00:15:57.485 sys 0m2.123s 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.485 ************************************ 00:15:57.485 END TEST raid5f_rebuild_test 00:15:57.485 ************************************ 00:15:57.485 02:31:31 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:57.485 02:31:31 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:57.485 02:31:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.485 02:31:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.485 ************************************ 00:15:57.485 START TEST raid5f_rebuild_test_sb 00:15:57.485 ************************************ 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:57.485 02:31:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:57.485 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84860 
00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84860 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84860 ']' 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.486 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.746 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:57.746 Zero copy mechanism will not be used. 00:15:57.746 [2024-11-28 02:31:31.196639] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
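bdevperf is launched above with `-o 3M`, and the app then reports "I/O size of 3145728 is greater than zero copy threshold (65536)". Those numbers line up: 3M is 3 MiB and the threshold is 64 KiB. A quick arithmetic check (illustrative only, not part of the test scripts):

```shell
#!/usr/bin/env bash
# '-o 3M' -> 3 MiB per I/O; bdevperf compares it against a 64 KiB
# zero-copy threshold and disables zero copy when the I/O is larger.
io_size=$((3 * 1024 * 1024))
threshold=$((64 * 1024))
echo "$io_size"    # 3145728
echo "$threshold"  # 65536
[ "$io_size" -gt "$threshold" ] && echo "zero copy disabled"
```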
00:15:57.746 [2024-11-28 02:31:31.196772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84860 ] 00:15:57.746 [2024-11-28 02:31:31.370835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.005 [2024-11-28 02:31:31.471170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.006 [2024-11-28 02:31:31.661853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.006 [2024-11-28 02:31:31.661911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.576 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.576 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:58.576 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.576 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:58.576 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.576 02:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.576 BaseBdev1_malloc 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.576 [2024-11-28 02:31:32.045364] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:58.576 [2024-11-28 02:31:32.045428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.576 [2024-11-28 02:31:32.045466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.576 [2024-11-28 02:31:32.045477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.576 [2024-11-28 02:31:32.047425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.576 [2024-11-28 02:31:32.047463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:58.576 BaseBdev1 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.576 BaseBdev2_malloc 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.576 [2024-11-28 02:31:32.098799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:58.576 [2024-11-28 02:31:32.098872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:58.576 [2024-11-28 02:31:32.098893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:58.576 [2024-11-28 02:31:32.098904] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.576 [2024-11-28 02:31:32.100876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.576 [2024-11-28 02:31:32.100914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:58.576 BaseBdev2 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.576 BaseBdev3_malloc 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.576 [2024-11-28 02:31:32.184409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:58.576 [2024-11-28 02:31:32.184479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.576 [2024-11-28 02:31:32.184501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:58.576 [2024-11-28 
02:31:32.184512] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.576 [2024-11-28 02:31:32.186467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.576 [2024-11-28 02:31:32.186500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:58.576 BaseBdev3 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.576 BaseBdev4_malloc 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.576 [2024-11-28 02:31:32.236023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:58.576 [2024-11-28 02:31:32.236084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.576 [2024-11-28 02:31:32.236120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:58.576 [2024-11-28 02:31:32.236131] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.576 [2024-11-28 02:31:32.238107] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:15:58.576 [2024-11-28 02:31:32.238143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:58.576 BaseBdev4 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:58.576 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.577 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.837 spare_malloc 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.837 spare_delay 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.837 [2024-11-28 02:31:32.297013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:58.837 [2024-11-28 02:31:32.297062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.837 [2024-11-28 02:31:32.297093] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:15:58.837 [2024-11-28 02:31:32.297103] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.837 [2024-11-28 02:31:32.299083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.837 [2024-11-28 02:31:32.299118] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:58.837 spare 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.837 [2024-11-28 02:31:32.309050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.837 [2024-11-28 02:31:32.310752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.837 [2024-11-28 02:31:32.310839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.837 [2024-11-28 02:31:32.310887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:58.837 [2024-11-28 02:31:32.311073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:58.837 [2024-11-28 02:31:32.311090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:58.837 [2024-11-28 02:31:32.311332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:58.837 [2024-11-28 02:31:32.318069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:58.837 [2024-11-28 02:31:32.318093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:15:58.837 [2024-11-28 02:31:32.318270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.837 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.837 02:31:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.837 "name": "raid_bdev1", 00:15:58.837 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:15:58.837 "strip_size_kb": 64, 00:15:58.837 "state": "online", 00:15:58.837 "raid_level": "raid5f", 00:15:58.837 "superblock": true, 00:15:58.837 "num_base_bdevs": 4, 00:15:58.837 "num_base_bdevs_discovered": 4, 00:15:58.837 "num_base_bdevs_operational": 4, 00:15:58.837 "base_bdevs_list": [ 00:15:58.837 { 00:15:58.837 "name": "BaseBdev1", 00:15:58.837 "uuid": "521aae6e-9594-5b4c-986f-c7da538b71e3", 00:15:58.837 "is_configured": true, 00:15:58.837 "data_offset": 2048, 00:15:58.837 "data_size": 63488 00:15:58.837 }, 00:15:58.837 { 00:15:58.837 "name": "BaseBdev2", 00:15:58.837 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:15:58.837 "is_configured": true, 00:15:58.837 "data_offset": 2048, 00:15:58.837 "data_size": 63488 00:15:58.837 }, 00:15:58.837 { 00:15:58.838 "name": "BaseBdev3", 00:15:58.838 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:15:58.838 "is_configured": true, 00:15:58.838 "data_offset": 2048, 00:15:58.838 "data_size": 63488 00:15:58.838 }, 00:15:58.838 { 00:15:58.838 "name": "BaseBdev4", 00:15:58.838 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:15:58.838 "is_configured": true, 00:15:58.838 "data_offset": 2048, 00:15:58.838 "data_size": 63488 00:15:58.838 } 00:15:58.838 ] 00:15:58.838 }' 00:15:58.838 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.838 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.098 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:59.098 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.098 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.098 02:31:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.098 [2024-11-28 02:31:32.773719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:59.358 02:31:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:59.358 02:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:59.358 [2024-11-28 02:31:33.017173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:59.619 /dev/nbd0 00:15:59.619 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:59.619 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:59.619 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:59.619 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:59.619 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.619 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.619 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:59.619 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.620 1+0 records in 00:15:59.620 
1+0 records out 00:15:59.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261973 s, 15.6 MB/s 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:59.620 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:59.880 496+0 records in 00:15:59.880 496+0 records out 00:15:59.880 97517568 bytes (98 MB, 93 MiB) copied, 0.44049 s, 221 MB/s 00:15:59.880 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:59.880 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.880 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:59.880 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:59.880 02:31:33 
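The `dd` above writes in `bs=196608` chunks because the script sets `write_unit_size=384` blocks for raid5f: with 4 base bdevs and a 64 KiB strip, a full stripe holds 3 data strips, i.e. 3 * 64 KiB = 196608 bytes (384 * 512-byte blocks), and 496 such writes give exactly the 97517568 bytes dd reports. A sketch of that arithmetic:

```shell
#!/usr/bin/env bash
# raid5f full-stripe write size: one strip per base bdev minus one parity strip.
strip_size_kb=64
num_base_bdevs=4
blocklen=512

full_stripe_bytes=$((strip_size_kb * 1024 * (num_base_bdevs - 1)))
echo "$full_stripe_bytes"                     # 196608
echo "$((full_stripe_bytes / blocklen))"      # 384 blocks, the write_unit_size
echo "$((496 * full_stripe_bytes))"           # 97517568, matching dd's byte count
```

Writing in full-stripe units keeps every dd write aligned to a complete parity stripe, which is why the test computes the size rather than picking an arbitrary block size.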
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:59.880 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.880 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.140 [2024-11-28 02:31:33.738661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.140 [2024-11-28 02:31:33.748733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:00.140 02:31:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.140 "name": "raid_bdev1", 00:16:00.140 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:00.140 "strip_size_kb": 64, 00:16:00.140 "state": "online", 00:16:00.140 "raid_level": "raid5f", 00:16:00.140 "superblock": true, 00:16:00.140 "num_base_bdevs": 4, 00:16:00.140 "num_base_bdevs_discovered": 3, 00:16:00.140 "num_base_bdevs_operational": 3, 00:16:00.140 
"base_bdevs_list": [ 00:16:00.140 { 00:16:00.140 "name": null, 00:16:00.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.140 "is_configured": false, 00:16:00.140 "data_offset": 0, 00:16:00.140 "data_size": 63488 00:16:00.140 }, 00:16:00.140 { 00:16:00.140 "name": "BaseBdev2", 00:16:00.140 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:00.140 "is_configured": true, 00:16:00.140 "data_offset": 2048, 00:16:00.140 "data_size": 63488 00:16:00.140 }, 00:16:00.140 { 00:16:00.140 "name": "BaseBdev3", 00:16:00.140 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:00.140 "is_configured": true, 00:16:00.140 "data_offset": 2048, 00:16:00.140 "data_size": 63488 00:16:00.140 }, 00:16:00.140 { 00:16:00.140 "name": "BaseBdev4", 00:16:00.140 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:00.140 "is_configured": true, 00:16:00.140 "data_offset": 2048, 00:16:00.140 "data_size": 63488 00:16:00.140 } 00:16:00.140 ] 00:16:00.140 }' 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.140 02:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.710 02:31:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.710 02:31:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.710 02:31:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.710 [2024-11-28 02:31:34.160076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.710 [2024-11-28 02:31:34.175547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:00.710 02:31:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.710 02:31:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:00.710 [2024-11-28 02:31:34.184148] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.656 "name": "raid_bdev1", 00:16:01.656 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:01.656 "strip_size_kb": 64, 00:16:01.656 "state": "online", 00:16:01.656 "raid_level": "raid5f", 00:16:01.656 "superblock": true, 00:16:01.656 "num_base_bdevs": 4, 00:16:01.656 "num_base_bdevs_discovered": 4, 00:16:01.656 "num_base_bdevs_operational": 4, 00:16:01.656 "process": { 00:16:01.656 "type": "rebuild", 00:16:01.656 "target": "spare", 00:16:01.656 "progress": { 00:16:01.656 "blocks": 19200, 00:16:01.656 "percent": 10 00:16:01.656 } 00:16:01.656 }, 00:16:01.656 "base_bdevs_list": [ 00:16:01.656 { 00:16:01.656 "name": "spare", 00:16:01.656 "uuid": 
"d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:01.656 "is_configured": true, 00:16:01.656 "data_offset": 2048, 00:16:01.656 "data_size": 63488 00:16:01.656 }, 00:16:01.656 { 00:16:01.656 "name": "BaseBdev2", 00:16:01.656 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:01.656 "is_configured": true, 00:16:01.656 "data_offset": 2048, 00:16:01.656 "data_size": 63488 00:16:01.656 }, 00:16:01.656 { 00:16:01.656 "name": "BaseBdev3", 00:16:01.656 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:01.656 "is_configured": true, 00:16:01.656 "data_offset": 2048, 00:16:01.656 "data_size": 63488 00:16:01.656 }, 00:16:01.656 { 00:16:01.656 "name": "BaseBdev4", 00:16:01.656 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:01.656 "is_configured": true, 00:16:01.656 "data_offset": 2048, 00:16:01.656 "data_size": 63488 00:16:01.656 } 00:16:01.656 ] 00:16:01.656 }' 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.656 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.656 [2024-11-28 02:31:35.294790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.916 [2024-11-28 02:31:35.389808] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:01.916 [2024-11-28 02:31:35.389869] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.916 [2024-11-28 02:31:35.389902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.916 [2024-11-28 02:31:35.389910] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.916 "name": "raid_bdev1", 00:16:01.916 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:01.916 "strip_size_kb": 64, 00:16:01.916 "state": "online", 00:16:01.916 "raid_level": "raid5f", 00:16:01.916 "superblock": true, 00:16:01.916 "num_base_bdevs": 4, 00:16:01.916 "num_base_bdevs_discovered": 3, 00:16:01.916 "num_base_bdevs_operational": 3, 00:16:01.916 "base_bdevs_list": [ 00:16:01.916 { 00:16:01.916 "name": null, 00:16:01.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.916 "is_configured": false, 00:16:01.916 "data_offset": 0, 00:16:01.916 "data_size": 63488 00:16:01.916 }, 00:16:01.916 { 00:16:01.916 "name": "BaseBdev2", 00:16:01.916 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:01.916 "is_configured": true, 00:16:01.916 "data_offset": 2048, 00:16:01.916 "data_size": 63488 00:16:01.916 }, 00:16:01.916 { 00:16:01.916 "name": "BaseBdev3", 00:16:01.916 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:01.916 "is_configured": true, 00:16:01.916 "data_offset": 2048, 00:16:01.916 "data_size": 63488 00:16:01.916 }, 00:16:01.916 { 00:16:01.916 "name": "BaseBdev4", 00:16:01.916 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:01.916 "is_configured": true, 00:16:01.916 "data_offset": 2048, 00:16:01.916 "data_size": 63488 00:16:01.916 } 00:16:01.916 ] 00:16:01.916 }' 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.916 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.176 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.176 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.176 
02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.176 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.176 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.176 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.176 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.176 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.176 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.176 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.437 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.437 "name": "raid_bdev1", 00:16:02.437 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:02.437 "strip_size_kb": 64, 00:16:02.437 "state": "online", 00:16:02.437 "raid_level": "raid5f", 00:16:02.437 "superblock": true, 00:16:02.437 "num_base_bdevs": 4, 00:16:02.437 "num_base_bdevs_discovered": 3, 00:16:02.437 "num_base_bdevs_operational": 3, 00:16:02.437 "base_bdevs_list": [ 00:16:02.437 { 00:16:02.437 "name": null, 00:16:02.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.437 "is_configured": false, 00:16:02.437 "data_offset": 0, 00:16:02.437 "data_size": 63488 00:16:02.437 }, 00:16:02.437 { 00:16:02.437 "name": "BaseBdev2", 00:16:02.437 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:02.437 "is_configured": true, 00:16:02.437 "data_offset": 2048, 00:16:02.437 "data_size": 63488 00:16:02.437 }, 00:16:02.437 { 00:16:02.437 "name": "BaseBdev3", 00:16:02.437 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:02.437 "is_configured": true, 00:16:02.437 "data_offset": 2048, 00:16:02.437 
"data_size": 63488 00:16:02.437 }, 00:16:02.437 { 00:16:02.437 "name": "BaseBdev4", 00:16:02.437 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:02.437 "is_configured": true, 00:16:02.437 "data_offset": 2048, 00:16:02.437 "data_size": 63488 00:16:02.437 } 00:16:02.437 ] 00:16:02.437 }' 00:16:02.437 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.437 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.437 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.437 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.437 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:02.437 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.437 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.437 [2024-11-28 02:31:35.941643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.437 [2024-11-28 02:31:35.955621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:02.437 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.437 02:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:02.437 [2024-11-28 02:31:35.964184] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.379 02:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.379 02:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.379 02:31:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.379 02:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.379 02:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.379 02:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.379 02:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.379 02:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.379 02:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.379 02:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.379 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.379 "name": "raid_bdev1", 00:16:03.379 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:03.379 "strip_size_kb": 64, 00:16:03.379 "state": "online", 00:16:03.379 "raid_level": "raid5f", 00:16:03.379 "superblock": true, 00:16:03.379 "num_base_bdevs": 4, 00:16:03.379 "num_base_bdevs_discovered": 4, 00:16:03.379 "num_base_bdevs_operational": 4, 00:16:03.379 "process": { 00:16:03.379 "type": "rebuild", 00:16:03.379 "target": "spare", 00:16:03.379 "progress": { 00:16:03.379 "blocks": 19200, 00:16:03.379 "percent": 10 00:16:03.379 } 00:16:03.379 }, 00:16:03.379 "base_bdevs_list": [ 00:16:03.379 { 00:16:03.379 "name": "spare", 00:16:03.379 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:03.379 "is_configured": true, 00:16:03.379 "data_offset": 2048, 00:16:03.379 "data_size": 63488 00:16:03.379 }, 00:16:03.379 { 00:16:03.379 "name": "BaseBdev2", 00:16:03.379 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:03.379 "is_configured": true, 00:16:03.379 "data_offset": 2048, 00:16:03.379 "data_size": 63488 00:16:03.379 }, 00:16:03.379 { 
00:16:03.379 "name": "BaseBdev3", 00:16:03.379 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:03.379 "is_configured": true, 00:16:03.379 "data_offset": 2048, 00:16:03.379 "data_size": 63488 00:16:03.379 }, 00:16:03.379 { 00:16:03.379 "name": "BaseBdev4", 00:16:03.379 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:03.379 "is_configured": true, 00:16:03.379 "data_offset": 2048, 00:16:03.379 "data_size": 63488 00:16:03.379 } 00:16:03.379 ] 00:16:03.379 }' 00:16:03.379 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.379 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.379 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:03.639 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=627 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.639 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.639 "name": "raid_bdev1", 00:16:03.639 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:03.639 "strip_size_kb": 64, 00:16:03.639 "state": "online", 00:16:03.639 "raid_level": "raid5f", 00:16:03.639 "superblock": true, 00:16:03.639 "num_base_bdevs": 4, 00:16:03.639 "num_base_bdevs_discovered": 4, 00:16:03.640 "num_base_bdevs_operational": 4, 00:16:03.640 "process": { 00:16:03.640 "type": "rebuild", 00:16:03.640 "target": "spare", 00:16:03.640 "progress": { 00:16:03.640 "blocks": 21120, 00:16:03.640 "percent": 11 00:16:03.640 } 00:16:03.640 }, 00:16:03.640 "base_bdevs_list": [ 00:16:03.640 { 00:16:03.640 "name": "spare", 00:16:03.640 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:03.640 "is_configured": true, 00:16:03.640 "data_offset": 2048, 00:16:03.640 "data_size": 63488 00:16:03.640 }, 00:16:03.640 { 00:16:03.640 "name": "BaseBdev2", 00:16:03.640 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:03.640 "is_configured": true, 00:16:03.640 "data_offset": 2048, 00:16:03.640 "data_size": 63488 00:16:03.640 }, 00:16:03.640 { 
00:16:03.640 "name": "BaseBdev3", 00:16:03.640 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:03.640 "is_configured": true, 00:16:03.640 "data_offset": 2048, 00:16:03.640 "data_size": 63488 00:16:03.640 }, 00:16:03.640 { 00:16:03.640 "name": "BaseBdev4", 00:16:03.640 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:03.640 "is_configured": true, 00:16:03.640 "data_offset": 2048, 00:16:03.640 "data_size": 63488 00:16:03.640 } 00:16:03.640 ] 00:16:03.640 }' 00:16:03.640 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.640 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.640 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.640 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.640 02:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.589 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.589 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.589 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.589 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.589 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.589 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.589 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.589 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.589 02:31:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.589 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.589 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.865 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.865 "name": "raid_bdev1", 00:16:04.865 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:04.865 "strip_size_kb": 64, 00:16:04.865 "state": "online", 00:16:04.865 "raid_level": "raid5f", 00:16:04.865 "superblock": true, 00:16:04.865 "num_base_bdevs": 4, 00:16:04.865 "num_base_bdevs_discovered": 4, 00:16:04.865 "num_base_bdevs_operational": 4, 00:16:04.865 "process": { 00:16:04.865 "type": "rebuild", 00:16:04.865 "target": "spare", 00:16:04.865 "progress": { 00:16:04.865 "blocks": 42240, 00:16:04.865 "percent": 22 00:16:04.865 } 00:16:04.865 }, 00:16:04.865 "base_bdevs_list": [ 00:16:04.865 { 00:16:04.865 "name": "spare", 00:16:04.865 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:04.865 "is_configured": true, 00:16:04.865 "data_offset": 2048, 00:16:04.865 "data_size": 63488 00:16:04.865 }, 00:16:04.866 { 00:16:04.866 "name": "BaseBdev2", 00:16:04.866 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:04.866 "is_configured": true, 00:16:04.866 "data_offset": 2048, 00:16:04.866 "data_size": 63488 00:16:04.866 }, 00:16:04.866 { 00:16:04.866 "name": "BaseBdev3", 00:16:04.866 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:04.866 "is_configured": true, 00:16:04.866 "data_offset": 2048, 00:16:04.866 "data_size": 63488 00:16:04.866 }, 00:16:04.866 { 00:16:04.866 "name": "BaseBdev4", 00:16:04.866 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:04.866 "is_configured": true, 00:16:04.866 "data_offset": 2048, 00:16:04.866 "data_size": 63488 00:16:04.866 } 00:16:04.866 ] 00:16:04.866 }' 00:16:04.866 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.866 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.866 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.866 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.866 02:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.819 "name": "raid_bdev1", 00:16:05.819 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:05.819 "strip_size_kb": 64, 00:16:05.819 "state": 
"online", 00:16:05.819 "raid_level": "raid5f", 00:16:05.819 "superblock": true, 00:16:05.819 "num_base_bdevs": 4, 00:16:05.819 "num_base_bdevs_discovered": 4, 00:16:05.819 "num_base_bdevs_operational": 4, 00:16:05.819 "process": { 00:16:05.819 "type": "rebuild", 00:16:05.819 "target": "spare", 00:16:05.819 "progress": { 00:16:05.819 "blocks": 65280, 00:16:05.819 "percent": 34 00:16:05.819 } 00:16:05.819 }, 00:16:05.819 "base_bdevs_list": [ 00:16:05.819 { 00:16:05.819 "name": "spare", 00:16:05.819 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:05.819 "is_configured": true, 00:16:05.819 "data_offset": 2048, 00:16:05.819 "data_size": 63488 00:16:05.819 }, 00:16:05.819 { 00:16:05.819 "name": "BaseBdev2", 00:16:05.819 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:05.819 "is_configured": true, 00:16:05.819 "data_offset": 2048, 00:16:05.819 "data_size": 63488 00:16:05.819 }, 00:16:05.819 { 00:16:05.819 "name": "BaseBdev3", 00:16:05.819 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:05.819 "is_configured": true, 00:16:05.819 "data_offset": 2048, 00:16:05.819 "data_size": 63488 00:16:05.819 }, 00:16:05.819 { 00:16:05.819 "name": "BaseBdev4", 00:16:05.819 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:05.819 "is_configured": true, 00:16:05.819 "data_offset": 2048, 00:16:05.819 "data_size": 63488 00:16:05.819 } 00:16:05.819 ] 00:16:05.819 }' 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.819 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.079 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.079 02:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.019 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.019 "name": "raid_bdev1", 00:16:07.019 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:07.019 "strip_size_kb": 64, 00:16:07.019 "state": "online", 00:16:07.019 "raid_level": "raid5f", 00:16:07.019 "superblock": true, 00:16:07.019 "num_base_bdevs": 4, 00:16:07.019 "num_base_bdevs_discovered": 4, 00:16:07.019 "num_base_bdevs_operational": 4, 00:16:07.019 "process": { 00:16:07.019 "type": "rebuild", 00:16:07.019 "target": "spare", 00:16:07.019 "progress": { 00:16:07.020 "blocks": 86400, 00:16:07.020 "percent": 45 00:16:07.020 } 00:16:07.020 }, 00:16:07.020 "base_bdevs_list": [ 00:16:07.020 { 00:16:07.020 "name": "spare", 00:16:07.020 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 
00:16:07.020 "is_configured": true, 00:16:07.020 "data_offset": 2048, 00:16:07.020 "data_size": 63488 00:16:07.020 }, 00:16:07.020 { 00:16:07.020 "name": "BaseBdev2", 00:16:07.020 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:07.020 "is_configured": true, 00:16:07.020 "data_offset": 2048, 00:16:07.020 "data_size": 63488 00:16:07.020 }, 00:16:07.020 { 00:16:07.020 "name": "BaseBdev3", 00:16:07.020 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:07.020 "is_configured": true, 00:16:07.020 "data_offset": 2048, 00:16:07.020 "data_size": 63488 00:16:07.020 }, 00:16:07.020 { 00:16:07.020 "name": "BaseBdev4", 00:16:07.020 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:07.020 "is_configured": true, 00:16:07.020 "data_offset": 2048, 00:16:07.020 "data_size": 63488 00:16:07.020 } 00:16:07.020 ] 00:16:07.020 }' 00:16:07.020 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.020 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.020 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.020 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.020 02:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.403 02:31:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.403 "name": "raid_bdev1", 00:16:08.403 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:08.403 "strip_size_kb": 64, 00:16:08.403 "state": "online", 00:16:08.403 "raid_level": "raid5f", 00:16:08.403 "superblock": true, 00:16:08.403 "num_base_bdevs": 4, 00:16:08.403 "num_base_bdevs_discovered": 4, 00:16:08.403 "num_base_bdevs_operational": 4, 00:16:08.403 "process": { 00:16:08.403 "type": "rebuild", 00:16:08.403 "target": "spare", 00:16:08.403 "progress": { 00:16:08.403 "blocks": 107520, 00:16:08.403 "percent": 56 00:16:08.403 } 00:16:08.403 }, 00:16:08.403 "base_bdevs_list": [ 00:16:08.403 { 00:16:08.403 "name": "spare", 00:16:08.403 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:08.403 "is_configured": true, 00:16:08.403 "data_offset": 2048, 00:16:08.403 "data_size": 63488 00:16:08.403 }, 00:16:08.403 { 00:16:08.403 "name": "BaseBdev2", 00:16:08.403 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:08.403 "is_configured": true, 00:16:08.403 "data_offset": 2048, 00:16:08.403 "data_size": 63488 00:16:08.403 }, 00:16:08.403 { 00:16:08.403 "name": "BaseBdev3", 00:16:08.403 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:08.403 "is_configured": true, 00:16:08.403 "data_offset": 2048, 00:16:08.403 
"data_size": 63488 00:16:08.403 }, 00:16:08.403 { 00:16:08.403 "name": "BaseBdev4", 00:16:08.403 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:08.403 "is_configured": true, 00:16:08.403 "data_offset": 2048, 00:16:08.403 "data_size": 63488 00:16:08.403 } 00:16:08.403 ] 00:16:08.403 }' 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.403 02:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.347 
02:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.347 "name": "raid_bdev1", 00:16:09.347 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:09.347 "strip_size_kb": 64, 00:16:09.347 "state": "online", 00:16:09.347 "raid_level": "raid5f", 00:16:09.347 "superblock": true, 00:16:09.347 "num_base_bdevs": 4, 00:16:09.347 "num_base_bdevs_discovered": 4, 00:16:09.347 "num_base_bdevs_operational": 4, 00:16:09.347 "process": { 00:16:09.347 "type": "rebuild", 00:16:09.347 "target": "spare", 00:16:09.347 "progress": { 00:16:09.347 "blocks": 130560, 00:16:09.347 "percent": 68 00:16:09.347 } 00:16:09.347 }, 00:16:09.347 "base_bdevs_list": [ 00:16:09.347 { 00:16:09.347 "name": "spare", 00:16:09.347 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:09.347 "is_configured": true, 00:16:09.347 "data_offset": 2048, 00:16:09.347 "data_size": 63488 00:16:09.347 }, 00:16:09.347 { 00:16:09.347 "name": "BaseBdev2", 00:16:09.347 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:09.347 "is_configured": true, 00:16:09.347 "data_offset": 2048, 00:16:09.347 "data_size": 63488 00:16:09.347 }, 00:16:09.347 { 00:16:09.347 "name": "BaseBdev3", 00:16:09.347 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:09.347 "is_configured": true, 00:16:09.347 "data_offset": 2048, 00:16:09.347 "data_size": 63488 00:16:09.347 }, 00:16:09.347 { 00:16:09.347 "name": "BaseBdev4", 00:16:09.347 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:09.347 "is_configured": true, 00:16:09.347 "data_offset": 2048, 00:16:09.347 "data_size": 63488 00:16:09.347 } 00:16:09.347 ] 00:16:09.347 }' 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.347 02:31:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.347 02:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.730 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.730 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.730 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.730 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.730 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.730 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.730 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.731 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.731 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.731 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.731 02:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.731 02:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.731 "name": "raid_bdev1", 00:16:10.731 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:10.731 "strip_size_kb": 64, 00:16:10.731 "state": "online", 00:16:10.731 "raid_level": "raid5f", 00:16:10.731 "superblock": true, 00:16:10.731 "num_base_bdevs": 4, 00:16:10.731 "num_base_bdevs_discovered": 4, 00:16:10.731 "num_base_bdevs_operational": 
4, 00:16:10.731 "process": { 00:16:10.731 "type": "rebuild", 00:16:10.731 "target": "spare", 00:16:10.731 "progress": { 00:16:10.731 "blocks": 151680, 00:16:10.731 "percent": 79 00:16:10.731 } 00:16:10.731 }, 00:16:10.731 "base_bdevs_list": [ 00:16:10.731 { 00:16:10.731 "name": "spare", 00:16:10.731 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:10.731 "is_configured": true, 00:16:10.731 "data_offset": 2048, 00:16:10.731 "data_size": 63488 00:16:10.731 }, 00:16:10.731 { 00:16:10.731 "name": "BaseBdev2", 00:16:10.731 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:10.731 "is_configured": true, 00:16:10.731 "data_offset": 2048, 00:16:10.731 "data_size": 63488 00:16:10.731 }, 00:16:10.731 { 00:16:10.731 "name": "BaseBdev3", 00:16:10.731 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:10.731 "is_configured": true, 00:16:10.731 "data_offset": 2048, 00:16:10.731 "data_size": 63488 00:16:10.731 }, 00:16:10.731 { 00:16:10.731 "name": "BaseBdev4", 00:16:10.731 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:10.731 "is_configured": true, 00:16:10.731 "data_offset": 2048, 00:16:10.731 "data_size": 63488 00:16:10.731 } 00:16:10.731 ] 00:16:10.731 }' 00:16:10.731 02:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.731 02:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.731 02:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.731 02:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.731 02:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.673 
02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.673 "name": "raid_bdev1", 00:16:11.673 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:11.673 "strip_size_kb": 64, 00:16:11.673 "state": "online", 00:16:11.673 "raid_level": "raid5f", 00:16:11.673 "superblock": true, 00:16:11.673 "num_base_bdevs": 4, 00:16:11.673 "num_base_bdevs_discovered": 4, 00:16:11.673 "num_base_bdevs_operational": 4, 00:16:11.673 "process": { 00:16:11.673 "type": "rebuild", 00:16:11.673 "target": "spare", 00:16:11.673 "progress": { 00:16:11.673 "blocks": 174720, 00:16:11.673 "percent": 91 00:16:11.673 } 00:16:11.673 }, 00:16:11.673 "base_bdevs_list": [ 00:16:11.673 { 00:16:11.673 "name": "spare", 00:16:11.673 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:11.673 "is_configured": true, 00:16:11.673 "data_offset": 2048, 00:16:11.673 "data_size": 63488 00:16:11.673 }, 00:16:11.673 { 00:16:11.673 "name": "BaseBdev2", 00:16:11.673 "uuid": 
"2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:11.673 "is_configured": true, 00:16:11.673 "data_offset": 2048, 00:16:11.673 "data_size": 63488 00:16:11.673 }, 00:16:11.673 { 00:16:11.673 "name": "BaseBdev3", 00:16:11.673 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:11.673 "is_configured": true, 00:16:11.673 "data_offset": 2048, 00:16:11.673 "data_size": 63488 00:16:11.673 }, 00:16:11.673 { 00:16:11.673 "name": "BaseBdev4", 00:16:11.673 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:11.673 "is_configured": true, 00:16:11.673 "data_offset": 2048, 00:16:11.673 "data_size": 63488 00:16:11.673 } 00:16:11.673 ] 00:16:11.673 }' 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.673 02:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.612 [2024-11-28 02:31:46.008006] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:12.612 [2024-11-28 02:31:46.008074] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:12.612 [2024-11-28 02:31:46.008199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.612 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.612 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.612 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.612 02:31:46 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.612 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.612 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.873 "name": "raid_bdev1", 00:16:12.873 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:12.873 "strip_size_kb": 64, 00:16:12.873 "state": "online", 00:16:12.873 "raid_level": "raid5f", 00:16:12.873 "superblock": true, 00:16:12.873 "num_base_bdevs": 4, 00:16:12.873 "num_base_bdevs_discovered": 4, 00:16:12.873 "num_base_bdevs_operational": 4, 00:16:12.873 "base_bdevs_list": [ 00:16:12.873 { 00:16:12.873 "name": "spare", 00:16:12.873 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:12.873 "is_configured": true, 00:16:12.873 "data_offset": 2048, 00:16:12.873 "data_size": 63488 00:16:12.873 }, 00:16:12.873 { 00:16:12.873 "name": "BaseBdev2", 00:16:12.873 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:12.873 "is_configured": true, 00:16:12.873 "data_offset": 2048, 00:16:12.873 "data_size": 63488 00:16:12.873 }, 00:16:12.873 { 00:16:12.873 "name": "BaseBdev3", 00:16:12.873 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:12.873 "is_configured": true, 00:16:12.873 "data_offset": 2048, 00:16:12.873 "data_size": 63488 00:16:12.873 }, 
00:16:12.873 { 00:16:12.873 "name": "BaseBdev4", 00:16:12.873 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:12.873 "is_configured": true, 00:16:12.873 "data_offset": 2048, 00:16:12.873 "data_size": 63488 00:16:12.873 } 00:16:12.873 ] 00:16:12.873 }' 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.873 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.874 "name": "raid_bdev1", 00:16:12.874 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:12.874 "strip_size_kb": 64, 00:16:12.874 "state": "online", 00:16:12.874 "raid_level": "raid5f", 00:16:12.874 "superblock": true, 00:16:12.874 "num_base_bdevs": 4, 00:16:12.874 "num_base_bdevs_discovered": 4, 00:16:12.874 "num_base_bdevs_operational": 4, 00:16:12.874 "base_bdevs_list": [ 00:16:12.874 { 00:16:12.874 "name": "spare", 00:16:12.874 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:12.874 "is_configured": true, 00:16:12.874 "data_offset": 2048, 00:16:12.874 "data_size": 63488 00:16:12.874 }, 00:16:12.874 { 00:16:12.874 "name": "BaseBdev2", 00:16:12.874 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:12.874 "is_configured": true, 00:16:12.874 "data_offset": 2048, 00:16:12.874 "data_size": 63488 00:16:12.874 }, 00:16:12.874 { 00:16:12.874 "name": "BaseBdev3", 00:16:12.874 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:12.874 "is_configured": true, 00:16:12.874 "data_offset": 2048, 00:16:12.874 "data_size": 63488 00:16:12.874 }, 00:16:12.874 { 00:16:12.874 "name": "BaseBdev4", 00:16:12.874 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:12.874 "is_configured": true, 00:16:12.874 "data_offset": 2048, 00:16:12.874 "data_size": 63488 00:16:12.874 } 00:16:12.874 ] 00:16:12.874 }' 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.874 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:13.134 02:31:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.134 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.135 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.135 "name": "raid_bdev1", 00:16:13.135 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:13.135 "strip_size_kb": 64, 00:16:13.135 "state": "online", 00:16:13.135 "raid_level": "raid5f", 00:16:13.135 "superblock": true, 00:16:13.135 "num_base_bdevs": 4, 00:16:13.135 "num_base_bdevs_discovered": 4, 00:16:13.135 "num_base_bdevs_operational": 4, 00:16:13.135 
"base_bdevs_list": [ 00:16:13.135 { 00:16:13.135 "name": "spare", 00:16:13.135 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:13.135 "is_configured": true, 00:16:13.135 "data_offset": 2048, 00:16:13.135 "data_size": 63488 00:16:13.135 }, 00:16:13.135 { 00:16:13.135 "name": "BaseBdev2", 00:16:13.135 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:13.135 "is_configured": true, 00:16:13.135 "data_offset": 2048, 00:16:13.135 "data_size": 63488 00:16:13.135 }, 00:16:13.135 { 00:16:13.135 "name": "BaseBdev3", 00:16:13.135 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:13.135 "is_configured": true, 00:16:13.135 "data_offset": 2048, 00:16:13.135 "data_size": 63488 00:16:13.135 }, 00:16:13.135 { 00:16:13.135 "name": "BaseBdev4", 00:16:13.135 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:13.135 "is_configured": true, 00:16:13.135 "data_offset": 2048, 00:16:13.135 "data_size": 63488 00:16:13.135 } 00:16:13.135 ] 00:16:13.135 }' 00:16:13.135 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.135 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.395 [2024-11-28 02:31:46.955862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:13.395 [2024-11-28 02:31:46.955952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.395 [2024-11-28 02:31:46.956059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.395 [2024-11-28 02:31:46.956174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:16:13.395 [2024-11-28 02:31:46.956239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:13.395 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:13.396 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:13.396 02:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:16:13.396 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:13.396 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:13.656 /dev/nbd0 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.656 1+0 records in 00:16:13.656 1+0 records out 00:16:13.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422418 s, 9.7 MB/s 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:13.656 02:31:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:13.656 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:13.916 /dev/nbd1 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:16:13.916 1+0 records in 00:16:13.916 1+0 records out 00:16:13.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298819 s, 13.7 MB/s 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:13.916 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:14.176 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:14.176 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:14.176 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:14.176 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:14.176 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:14.176 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:14.176 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:14.176 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:16:14.436 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:14.436 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:14.436 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:14.436 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:14.436 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:14.436 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:14.436 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:14.436 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:14.436 02:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.436 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 [2024-11-28 02:31:48.080215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:14.436 [2024-11-28 02:31:48.080274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.437 [2024-11-28 02:31:48.080300] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:14.437 [2024-11-28 02:31:48.080309] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.437 [2024-11-28 02:31:48.082523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.437 [2024-11-28 02:31:48.082605] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:14.437 [2024-11-28 02:31:48.082701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:14.437 [2024-11-28 02:31:48.082760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.437 [2024-11-28 02:31:48.082912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.437 [2024-11-28 02:31:48.083027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.437 [2024-11-28 02:31:48.083109] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:14.437 spare 00:16:14.437 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.437 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:14.437 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.437 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.696 [2024-11-28 02:31:48.183003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:14.697 [2024-11-28 02:31:48.183028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:14.697 [2024-11-28 02:31:48.183266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:14.697 [2024-11-28 02:31:48.190235] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:14.697 [2024-11-28 02:31:48.190253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:14.697 [2024-11-28 02:31:48.190421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.697 "name": "raid_bdev1", 00:16:14.697 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:14.697 "strip_size_kb": 64, 00:16:14.697 "state": "online", 00:16:14.697 "raid_level": "raid5f", 00:16:14.697 "superblock": true, 00:16:14.697 "num_base_bdevs": 4, 00:16:14.697 "num_base_bdevs_discovered": 4, 00:16:14.697 "num_base_bdevs_operational": 4, 00:16:14.697 "base_bdevs_list": [ 00:16:14.697 { 00:16:14.697 "name": "spare", 00:16:14.697 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:14.697 "is_configured": true, 00:16:14.697 "data_offset": 2048, 00:16:14.697 "data_size": 63488 00:16:14.697 }, 00:16:14.697 { 00:16:14.697 "name": "BaseBdev2", 00:16:14.697 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:14.697 "is_configured": true, 00:16:14.697 "data_offset": 
2048, 00:16:14.697 "data_size": 63488 00:16:14.697 }, 00:16:14.697 { 00:16:14.697 "name": "BaseBdev3", 00:16:14.697 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:14.697 "is_configured": true, 00:16:14.697 "data_offset": 2048, 00:16:14.697 "data_size": 63488 00:16:14.697 }, 00:16:14.697 { 00:16:14.697 "name": "BaseBdev4", 00:16:14.697 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:14.697 "is_configured": true, 00:16:14.697 "data_offset": 2048, 00:16:14.697 "data_size": 63488 00:16:14.697 } 00:16:14.697 ] 00:16:14.697 }' 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.697 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.267 "name": 
"raid_bdev1", 00:16:15.267 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:15.267 "strip_size_kb": 64, 00:16:15.267 "state": "online", 00:16:15.267 "raid_level": "raid5f", 00:16:15.267 "superblock": true, 00:16:15.267 "num_base_bdevs": 4, 00:16:15.267 "num_base_bdevs_discovered": 4, 00:16:15.267 "num_base_bdevs_operational": 4, 00:16:15.267 "base_bdevs_list": [ 00:16:15.267 { 00:16:15.267 "name": "spare", 00:16:15.267 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:15.267 "is_configured": true, 00:16:15.267 "data_offset": 2048, 00:16:15.267 "data_size": 63488 00:16:15.267 }, 00:16:15.267 { 00:16:15.267 "name": "BaseBdev2", 00:16:15.267 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:15.267 "is_configured": true, 00:16:15.267 "data_offset": 2048, 00:16:15.267 "data_size": 63488 00:16:15.267 }, 00:16:15.267 { 00:16:15.267 "name": "BaseBdev3", 00:16:15.267 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:15.267 "is_configured": true, 00:16:15.267 "data_offset": 2048, 00:16:15.267 "data_size": 63488 00:16:15.267 }, 00:16:15.267 { 00:16:15.267 "name": "BaseBdev4", 00:16:15.267 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:15.267 "is_configured": true, 00:16:15.267 "data_offset": 2048, 00:16:15.267 "data_size": 63488 00:16:15.267 } 00:16:15.267 ] 00:16:15.267 }' 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.267 
02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.267 [2024-11-28 02:31:48.857226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.267 "name": "raid_bdev1", 00:16:15.267 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:15.267 "strip_size_kb": 64, 00:16:15.267 "state": "online", 00:16:15.267 "raid_level": "raid5f", 00:16:15.267 "superblock": true, 00:16:15.267 "num_base_bdevs": 4, 00:16:15.267 "num_base_bdevs_discovered": 3, 00:16:15.267 "num_base_bdevs_operational": 3, 00:16:15.267 "base_bdevs_list": [ 00:16:15.267 { 00:16:15.267 "name": null, 00:16:15.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.267 "is_configured": false, 00:16:15.267 "data_offset": 0, 00:16:15.267 "data_size": 63488 00:16:15.267 }, 00:16:15.267 { 00:16:15.267 "name": "BaseBdev2", 00:16:15.267 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:15.267 "is_configured": true, 00:16:15.267 "data_offset": 2048, 00:16:15.267 "data_size": 63488 00:16:15.267 }, 00:16:15.267 { 00:16:15.267 "name": "BaseBdev3", 00:16:15.267 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:15.267 "is_configured": true, 00:16:15.267 "data_offset": 2048, 00:16:15.267 "data_size": 63488 00:16:15.267 }, 00:16:15.267 { 00:16:15.267 "name": "BaseBdev4", 00:16:15.267 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:15.267 "is_configured": true, 00:16:15.267 "data_offset": 
2048, 00:16:15.267 "data_size": 63488 00:16:15.267 } 00:16:15.267 ] 00:16:15.267 }' 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.267 02:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.838 02:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:15.838 02:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.838 02:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.838 [2024-11-28 02:31:49.304474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.838 [2024-11-28 02:31:49.304690] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:15.838 [2024-11-28 02:31:49.304753] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:15.838 [2024-11-28 02:31:49.304834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.838 [2024-11-28 02:31:49.319390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:15.838 02:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.838 02:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:15.838 [2024-11-28 02:31:49.328387] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.779 "name": "raid_bdev1", 00:16:16.779 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:16.779 "strip_size_kb": 64, 00:16:16.779 "state": "online", 00:16:16.779 
"raid_level": "raid5f", 00:16:16.779 "superblock": true, 00:16:16.779 "num_base_bdevs": 4, 00:16:16.779 "num_base_bdevs_discovered": 4, 00:16:16.779 "num_base_bdevs_operational": 4, 00:16:16.779 "process": { 00:16:16.779 "type": "rebuild", 00:16:16.779 "target": "spare", 00:16:16.779 "progress": { 00:16:16.779 "blocks": 19200, 00:16:16.779 "percent": 10 00:16:16.779 } 00:16:16.779 }, 00:16:16.779 "base_bdevs_list": [ 00:16:16.779 { 00:16:16.779 "name": "spare", 00:16:16.779 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:16.779 "is_configured": true, 00:16:16.779 "data_offset": 2048, 00:16:16.779 "data_size": 63488 00:16:16.779 }, 00:16:16.779 { 00:16:16.779 "name": "BaseBdev2", 00:16:16.779 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:16.779 "is_configured": true, 00:16:16.779 "data_offset": 2048, 00:16:16.779 "data_size": 63488 00:16:16.779 }, 00:16:16.779 { 00:16:16.779 "name": "BaseBdev3", 00:16:16.779 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:16.779 "is_configured": true, 00:16:16.779 "data_offset": 2048, 00:16:16.779 "data_size": 63488 00:16:16.779 }, 00:16:16.779 { 00:16:16.779 "name": "BaseBdev4", 00:16:16.779 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:16.779 "is_configured": true, 00:16:16.779 "data_offset": 2048, 00:16:16.779 "data_size": 63488 00:16:16.779 } 00:16:16.779 ] 00:16:16.779 }' 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.779 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.039 [2024-11-28 02:31:50.479753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.039 [2024-11-28 02:31:50.534213] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:17.039 [2024-11-28 02:31:50.534330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.039 [2024-11-28 02:31:50.534348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.039 [2024-11-28 02:31:50.534358] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.039 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.039 "name": "raid_bdev1", 00:16:17.039 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:17.039 "strip_size_kb": 64, 00:16:17.039 "state": "online", 00:16:17.039 "raid_level": "raid5f", 00:16:17.039 "superblock": true, 00:16:17.039 "num_base_bdevs": 4, 00:16:17.039 "num_base_bdevs_discovered": 3, 00:16:17.039 "num_base_bdevs_operational": 3, 00:16:17.039 "base_bdevs_list": [ 00:16:17.039 { 00:16:17.039 "name": null, 00:16:17.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.039 "is_configured": false, 00:16:17.039 "data_offset": 0, 00:16:17.039 "data_size": 63488 00:16:17.039 }, 00:16:17.039 { 00:16:17.039 "name": "BaseBdev2", 00:16:17.039 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:17.039 "is_configured": true, 00:16:17.039 "data_offset": 2048, 00:16:17.039 "data_size": 63488 00:16:17.039 }, 00:16:17.040 { 00:16:17.040 "name": "BaseBdev3", 00:16:17.040 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:17.040 "is_configured": true, 00:16:17.040 "data_offset": 2048, 00:16:17.040 "data_size": 63488 00:16:17.040 }, 00:16:17.040 { 00:16:17.040 "name": "BaseBdev4", 00:16:17.040 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:17.040 "is_configured": true, 00:16:17.040 "data_offset": 2048, 00:16:17.040 "data_size": 63488 00:16:17.040 } 00:16:17.040 ] 00:16:17.040 
}' 00:16:17.040 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.040 02:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.610 02:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:17.610 02:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.610 02:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.610 [2024-11-28 02:31:51.046942] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:17.610 [2024-11-28 02:31:51.047047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.610 [2024-11-28 02:31:51.047090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:17.610 [2024-11-28 02:31:51.047121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.610 [2024-11-28 02:31:51.047685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.610 [2024-11-28 02:31:51.047754] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:17.610 [2024-11-28 02:31:51.047893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:17.610 [2024-11-28 02:31:51.047960] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:17.610 [2024-11-28 02:31:51.048009] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:17.610 [2024-11-28 02:31:51.048070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.610 [2024-11-28 02:31:51.062284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:17.610 spare 00:16:17.610 02:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.610 02:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:17.610 [2024-11-28 02:31:51.070790] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.563 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.563 "name": "raid_bdev1", 00:16:18.563 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:18.563 "strip_size_kb": 64, 00:16:18.563 "state": 
"online", 00:16:18.563 "raid_level": "raid5f", 00:16:18.563 "superblock": true, 00:16:18.563 "num_base_bdevs": 4, 00:16:18.563 "num_base_bdevs_discovered": 4, 00:16:18.563 "num_base_bdevs_operational": 4, 00:16:18.563 "process": { 00:16:18.563 "type": "rebuild", 00:16:18.563 "target": "spare", 00:16:18.563 "progress": { 00:16:18.563 "blocks": 19200, 00:16:18.563 "percent": 10 00:16:18.563 } 00:16:18.563 }, 00:16:18.563 "base_bdevs_list": [ 00:16:18.563 { 00:16:18.563 "name": "spare", 00:16:18.563 "uuid": "d1d55d87-338b-5b5c-8b5f-e6429d05f81f", 00:16:18.563 "is_configured": true, 00:16:18.564 "data_offset": 2048, 00:16:18.564 "data_size": 63488 00:16:18.564 }, 00:16:18.564 { 00:16:18.564 "name": "BaseBdev2", 00:16:18.564 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:18.564 "is_configured": true, 00:16:18.564 "data_offset": 2048, 00:16:18.564 "data_size": 63488 00:16:18.564 }, 00:16:18.564 { 00:16:18.564 "name": "BaseBdev3", 00:16:18.564 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:18.564 "is_configured": true, 00:16:18.564 "data_offset": 2048, 00:16:18.564 "data_size": 63488 00:16:18.564 }, 00:16:18.564 { 00:16:18.564 "name": "BaseBdev4", 00:16:18.564 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:18.564 "is_configured": true, 00:16:18.564 "data_offset": 2048, 00:16:18.564 "data_size": 63488 00:16:18.564 } 00:16:18.564 ] 00:16:18.564 }' 00:16:18.564 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.564 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.564 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.564 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.564 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:18.564 02:31:52 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.564 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.564 [2024-11-28 02:31:52.185641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.836 [2024-11-28 02:31:52.276529] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:18.836 [2024-11-28 02:31:52.276581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.836 [2024-11-28 02:31:52.276600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.836 [2024-11-28 02:31:52.276607] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:18.836 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.836 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:18.836 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.836 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.836 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.836 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.837 02:31:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.837 "name": "raid_bdev1", 00:16:18.837 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:18.837 "strip_size_kb": 64, 00:16:18.837 "state": "online", 00:16:18.837 "raid_level": "raid5f", 00:16:18.837 "superblock": true, 00:16:18.837 "num_base_bdevs": 4, 00:16:18.837 "num_base_bdevs_discovered": 3, 00:16:18.837 "num_base_bdevs_operational": 3, 00:16:18.837 "base_bdevs_list": [ 00:16:18.837 { 00:16:18.837 "name": null, 00:16:18.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.837 "is_configured": false, 00:16:18.837 "data_offset": 0, 00:16:18.837 "data_size": 63488 00:16:18.837 }, 00:16:18.837 { 00:16:18.837 "name": "BaseBdev2", 00:16:18.837 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:18.837 "is_configured": true, 00:16:18.837 "data_offset": 2048, 00:16:18.837 "data_size": 63488 00:16:18.837 }, 00:16:18.837 { 00:16:18.837 "name": "BaseBdev3", 00:16:18.837 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:18.837 "is_configured": true, 00:16:18.837 "data_offset": 2048, 00:16:18.837 "data_size": 63488 00:16:18.837 }, 00:16:18.837 { 00:16:18.837 "name": "BaseBdev4", 00:16:18.837 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:18.837 "is_configured": true, 00:16:18.837 "data_offset": 2048, 00:16:18.837 
"data_size": 63488 00:16:18.837 } 00:16:18.837 ] 00:16:18.837 }' 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.837 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.097 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.097 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.097 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.097 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.097 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.097 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.097 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.097 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.097 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.357 "name": "raid_bdev1", 00:16:19.357 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:19.357 "strip_size_kb": 64, 00:16:19.357 "state": "online", 00:16:19.357 "raid_level": "raid5f", 00:16:19.357 "superblock": true, 00:16:19.357 "num_base_bdevs": 4, 00:16:19.357 "num_base_bdevs_discovered": 3, 00:16:19.357 "num_base_bdevs_operational": 3, 00:16:19.357 "base_bdevs_list": [ 00:16:19.357 { 00:16:19.357 "name": null, 00:16:19.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.357 
"is_configured": false, 00:16:19.357 "data_offset": 0, 00:16:19.357 "data_size": 63488 00:16:19.357 }, 00:16:19.357 { 00:16:19.357 "name": "BaseBdev2", 00:16:19.357 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:19.357 "is_configured": true, 00:16:19.357 "data_offset": 2048, 00:16:19.357 "data_size": 63488 00:16:19.357 }, 00:16:19.357 { 00:16:19.357 "name": "BaseBdev3", 00:16:19.357 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:19.357 "is_configured": true, 00:16:19.357 "data_offset": 2048, 00:16:19.357 "data_size": 63488 00:16:19.357 }, 00:16:19.357 { 00:16:19.357 "name": "BaseBdev4", 00:16:19.357 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:19.357 "is_configured": true, 00:16:19.357 "data_offset": 2048, 00:16:19.357 "data_size": 63488 00:16:19.357 } 00:16:19.357 ] 00:16:19.357 }' 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.357 02:31:52 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.357 [2024-11-28 02:31:52.932334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:19.357 [2024-11-28 02:31:52.932423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.357 [2024-11-28 02:31:52.932464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:19.357 [2024-11-28 02:31:52.932473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.357 [2024-11-28 02:31:52.932953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.357 [2024-11-28 02:31:52.932976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:19.357 [2024-11-28 02:31:52.933054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:19.357 [2024-11-28 02:31:52.933067] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:19.357 [2024-11-28 02:31:52.933079] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:19.357 [2024-11-28 02:31:52.933089] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:19.357 BaseBdev1 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.357 02:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.298 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.558 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.558 "name": "raid_bdev1", 00:16:20.558 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:20.558 "strip_size_kb": 64, 00:16:20.558 "state": "online", 00:16:20.558 "raid_level": "raid5f", 00:16:20.558 "superblock": true, 00:16:20.558 "num_base_bdevs": 4, 00:16:20.558 "num_base_bdevs_discovered": 3, 00:16:20.558 "num_base_bdevs_operational": 3, 00:16:20.558 "base_bdevs_list": [ 00:16:20.558 { 00:16:20.558 "name": null, 00:16:20.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.558 "is_configured": false, 00:16:20.558 
"data_offset": 0, 00:16:20.558 "data_size": 63488 00:16:20.558 }, 00:16:20.558 { 00:16:20.558 "name": "BaseBdev2", 00:16:20.558 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:20.558 "is_configured": true, 00:16:20.558 "data_offset": 2048, 00:16:20.558 "data_size": 63488 00:16:20.558 }, 00:16:20.558 { 00:16:20.558 "name": "BaseBdev3", 00:16:20.558 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:20.558 "is_configured": true, 00:16:20.558 "data_offset": 2048, 00:16:20.558 "data_size": 63488 00:16:20.558 }, 00:16:20.558 { 00:16:20.558 "name": "BaseBdev4", 00:16:20.558 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:20.558 "is_configured": true, 00:16:20.558 "data_offset": 2048, 00:16:20.558 "data_size": 63488 00:16:20.558 } 00:16:20.558 ] 00:16:20.558 }' 00:16:20.558 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.558 02:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.818 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.818 "name": "raid_bdev1", 00:16:20.818 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:20.818 "strip_size_kb": 64, 00:16:20.818 "state": "online", 00:16:20.818 "raid_level": "raid5f", 00:16:20.818 "superblock": true, 00:16:20.818 "num_base_bdevs": 4, 00:16:20.818 "num_base_bdevs_discovered": 3, 00:16:20.818 "num_base_bdevs_operational": 3, 00:16:20.818 "base_bdevs_list": [ 00:16:20.818 { 00:16:20.818 "name": null, 00:16:20.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.818 "is_configured": false, 00:16:20.818 "data_offset": 0, 00:16:20.818 "data_size": 63488 00:16:20.818 }, 00:16:20.818 { 00:16:20.818 "name": "BaseBdev2", 00:16:20.818 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:20.818 "is_configured": true, 00:16:20.818 "data_offset": 2048, 00:16:20.818 "data_size": 63488 00:16:20.818 }, 00:16:20.818 { 00:16:20.818 "name": "BaseBdev3", 00:16:20.818 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:20.818 "is_configured": true, 00:16:20.818 "data_offset": 2048, 00:16:20.818 "data_size": 63488 00:16:20.818 }, 00:16:20.818 { 00:16:20.818 "name": "BaseBdev4", 00:16:20.818 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:20.818 "is_configured": true, 00:16:20.818 "data_offset": 2048, 00:16:20.818 "data_size": 63488 00:16:20.818 } 00:16:20.818 ] 00:16:20.818 }' 00:16:20.819 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.819 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.819 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.078 
02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.078 [2024-11-28 02:31:54.529650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.078 [2024-11-28 02:31:54.529873] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:21.078 [2024-11-28 02:31:54.529898] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:21.078 request: 00:16:21.078 { 00:16:21.078 "base_bdev": "BaseBdev1", 00:16:21.078 "raid_bdev": "raid_bdev1", 00:16:21.078 "method": "bdev_raid_add_base_bdev", 00:16:21.078 "req_id": 1 00:16:21.078 } 00:16:21.078 Got JSON-RPC error response 00:16:21.078 response: 00:16:21.078 { 00:16:21.078 "code": -22, 00:16:21.078 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:21.078 } 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.078 02:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:22.015 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:22.015 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.015 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.015 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.015 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.015 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.015 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.015 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.015 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.015 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.016 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.016 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.016 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.016 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.016 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.016 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.016 "name": "raid_bdev1", 00:16:22.016 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:22.016 "strip_size_kb": 64, 00:16:22.016 "state": "online", 00:16:22.016 "raid_level": "raid5f", 00:16:22.016 "superblock": true, 00:16:22.016 "num_base_bdevs": 4, 00:16:22.016 "num_base_bdevs_discovered": 3, 00:16:22.016 "num_base_bdevs_operational": 3, 00:16:22.016 "base_bdevs_list": [ 00:16:22.016 { 00:16:22.016 "name": null, 00:16:22.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.016 "is_configured": false, 00:16:22.016 "data_offset": 0, 00:16:22.016 "data_size": 63488 00:16:22.016 }, 00:16:22.016 { 00:16:22.016 "name": "BaseBdev2", 00:16:22.016 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:22.016 "is_configured": true, 00:16:22.016 "data_offset": 2048, 00:16:22.016 "data_size": 63488 00:16:22.016 }, 00:16:22.016 { 00:16:22.016 "name": "BaseBdev3", 00:16:22.016 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:22.016 "is_configured": true, 00:16:22.016 "data_offset": 2048, 00:16:22.016 "data_size": 63488 00:16:22.016 }, 00:16:22.016 { 00:16:22.016 "name": "BaseBdev4", 00:16:22.016 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:22.016 "is_configured": true, 00:16:22.016 "data_offset": 2048, 00:16:22.016 "data_size": 63488 00:16:22.016 } 00:16:22.016 ] 00:16:22.016 }' 00:16:22.016 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.016 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:22.586 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.586 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.586 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.586 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.586 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.586 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.586 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.586 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.586 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.586 02:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.586 "name": "raid_bdev1", 00:16:22.586 "uuid": "ebec8792-f04d-4d56-93f8-7e1df1ba0360", 00:16:22.586 "strip_size_kb": 64, 00:16:22.586 "state": "online", 00:16:22.586 "raid_level": "raid5f", 00:16:22.586 "superblock": true, 00:16:22.586 "num_base_bdevs": 4, 00:16:22.586 "num_base_bdevs_discovered": 3, 00:16:22.586 "num_base_bdevs_operational": 3, 00:16:22.586 "base_bdevs_list": [ 00:16:22.586 { 00:16:22.586 "name": null, 00:16:22.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.586 "is_configured": false, 00:16:22.586 "data_offset": 0, 00:16:22.586 "data_size": 63488 00:16:22.586 }, 00:16:22.586 { 00:16:22.586 "name": "BaseBdev2", 00:16:22.586 "uuid": "2dbc37bb-8f36-5095-a6be-f73c244346f8", 00:16:22.586 "is_configured": true, 
00:16:22.586 "data_offset": 2048, 00:16:22.586 "data_size": 63488 00:16:22.586 }, 00:16:22.586 { 00:16:22.586 "name": "BaseBdev3", 00:16:22.586 "uuid": "3eea974c-962c-5e79-8924-786c6f298582", 00:16:22.586 "is_configured": true, 00:16:22.586 "data_offset": 2048, 00:16:22.586 "data_size": 63488 00:16:22.586 }, 00:16:22.586 { 00:16:22.586 "name": "BaseBdev4", 00:16:22.586 "uuid": "64f6575d-4ce4-596a-b0a7-4fceddc77f64", 00:16:22.586 "is_configured": true, 00:16:22.586 "data_offset": 2048, 00:16:22.586 "data_size": 63488 00:16:22.586 } 00:16:22.586 ] 00:16:22.586 }' 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84860 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84860 ']' 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84860 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84860 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.586 killing process with pid 84860 00:16:22.586 Received shutdown signal, test time was about 60.000000 seconds 00:16:22.586 00:16:22.586 Latency(us) 00:16:22.586 [2024-11-28T02:31:56.265Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.586 [2024-11-28T02:31:56.265Z] =================================================================================================================== 00:16:22.586 [2024-11-28T02:31:56.265Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84860' 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84860 00:16:22.586 [2024-11-28 02:31:56.150591] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.586 [2024-11-28 02:31:56.150708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.586 02:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84860 00:16:22.586 [2024-11-28 02:31:56.150803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.586 [2024-11-28 02:31:56.150817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:23.157 [2024-11-28 02:31:56.603623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.097 02:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:24.097 00:16:24.097 real 0m26.536s 00:16:24.097 user 0m33.314s 00:16:24.097 sys 0m2.806s 00:16:24.097 02:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.097 ************************************ 00:16:24.097 END TEST raid5f_rebuild_test_sb 00:16:24.097 ************************************ 00:16:24.097 02:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.097 02:31:57 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:24.097 02:31:57 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:24.097 02:31:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:24.097 02:31:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.097 02:31:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.097 ************************************ 00:16:24.097 START TEST raid_state_function_test_sb_4k 00:16:24.097 ************************************ 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:24.097 02:31:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85672 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85672' 00:16:24.097 Process raid pid: 85672 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85672 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85672 ']' 00:16:24.097 02:31:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.097 02:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.358 [2024-11-28 02:31:57.802040] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:24.358 [2024-11-28 02:31:57.802218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.358 [2024-11-28 02:31:57.975267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.617 [2024-11-28 02:31:58.081104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.617 [2024-11-28 02:31:58.290839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.617 [2024-11-28 02:31:58.290871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.186 [2024-11-28 02:31:58.616097] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.186 [2024-11-28 02:31:58.616195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.186 [2024-11-28 02:31:58.616241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.186 [2024-11-28 02:31:58.616264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.186 
02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.186 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.186 "name": "Existed_Raid", 00:16:25.186 "uuid": "4e40d40a-8962-411a-8144-458380af7276", 00:16:25.187 "strip_size_kb": 0, 00:16:25.187 "state": "configuring", 00:16:25.187 "raid_level": "raid1", 00:16:25.187 "superblock": true, 00:16:25.187 "num_base_bdevs": 2, 00:16:25.187 "num_base_bdevs_discovered": 0, 00:16:25.187 "num_base_bdevs_operational": 2, 00:16:25.187 "base_bdevs_list": [ 00:16:25.187 { 00:16:25.187 "name": "BaseBdev1", 00:16:25.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.187 "is_configured": false, 00:16:25.187 "data_offset": 0, 00:16:25.187 "data_size": 0 00:16:25.187 }, 00:16:25.187 { 00:16:25.187 "name": "BaseBdev2", 00:16:25.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.187 "is_configured": false, 00:16:25.187 "data_offset": 0, 00:16:25.187 "data_size": 0 00:16:25.187 } 00:16:25.187 ] 00:16:25.187 }' 00:16:25.187 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.187 02:31:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.446 [2024-11-28 02:31:59.043306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.446 [2024-11-28 02:31:59.043379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.446 [2024-11-28 02:31:59.055272] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.446 [2024-11-28 02:31:59.055359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.446 [2024-11-28 02:31:59.055385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.446 [2024-11-28 02:31:59.055409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.446 02:31:59 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.446 [2024-11-28 02:31:59.101126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.446 BaseBdev1 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.446 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.706 [ 00:16:25.706 { 00:16:25.706 "name": "BaseBdev1", 00:16:25.706 "aliases": [ 00:16:25.706 
"9470f7ff-bc82-47b6-9511-c5177e11fbe1" 00:16:25.706 ], 00:16:25.706 "product_name": "Malloc disk", 00:16:25.706 "block_size": 4096, 00:16:25.706 "num_blocks": 8192, 00:16:25.706 "uuid": "9470f7ff-bc82-47b6-9511-c5177e11fbe1", 00:16:25.706 "assigned_rate_limits": { 00:16:25.706 "rw_ios_per_sec": 0, 00:16:25.706 "rw_mbytes_per_sec": 0, 00:16:25.706 "r_mbytes_per_sec": 0, 00:16:25.706 "w_mbytes_per_sec": 0 00:16:25.706 }, 00:16:25.706 "claimed": true, 00:16:25.706 "claim_type": "exclusive_write", 00:16:25.706 "zoned": false, 00:16:25.706 "supported_io_types": { 00:16:25.706 "read": true, 00:16:25.706 "write": true, 00:16:25.706 "unmap": true, 00:16:25.706 "flush": true, 00:16:25.706 "reset": true, 00:16:25.706 "nvme_admin": false, 00:16:25.706 "nvme_io": false, 00:16:25.706 "nvme_io_md": false, 00:16:25.706 "write_zeroes": true, 00:16:25.706 "zcopy": true, 00:16:25.706 "get_zone_info": false, 00:16:25.706 "zone_management": false, 00:16:25.706 "zone_append": false, 00:16:25.706 "compare": false, 00:16:25.706 "compare_and_write": false, 00:16:25.706 "abort": true, 00:16:25.706 "seek_hole": false, 00:16:25.706 "seek_data": false, 00:16:25.706 "copy": true, 00:16:25.706 "nvme_iov_md": false 00:16:25.706 }, 00:16:25.706 "memory_domains": [ 00:16:25.706 { 00:16:25.706 "dma_device_id": "system", 00:16:25.706 "dma_device_type": 1 00:16:25.706 }, 00:16:25.706 { 00:16:25.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.706 "dma_device_type": 2 00:16:25.706 } 00:16:25.706 ], 00:16:25.706 "driver_specific": {} 00:16:25.706 } 00:16:25.706 ] 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.706 "name": "Existed_Raid", 00:16:25.706 "uuid": "31b15d13-12bc-4c27-8097-653fb27c075f", 00:16:25.706 "strip_size_kb": 0, 00:16:25.706 "state": "configuring", 00:16:25.706 "raid_level": "raid1", 00:16:25.706 "superblock": true, 00:16:25.706 "num_base_bdevs": 2, 00:16:25.706 
"num_base_bdevs_discovered": 1, 00:16:25.706 "num_base_bdevs_operational": 2, 00:16:25.706 "base_bdevs_list": [ 00:16:25.706 { 00:16:25.706 "name": "BaseBdev1", 00:16:25.706 "uuid": "9470f7ff-bc82-47b6-9511-c5177e11fbe1", 00:16:25.706 "is_configured": true, 00:16:25.706 "data_offset": 256, 00:16:25.706 "data_size": 7936 00:16:25.706 }, 00:16:25.706 { 00:16:25.706 "name": "BaseBdev2", 00:16:25.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.706 "is_configured": false, 00:16:25.706 "data_offset": 0, 00:16:25.706 "data_size": 0 00:16:25.706 } 00:16:25.706 ] 00:16:25.706 }' 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.706 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.966 [2024-11-28 02:31:59.584294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.966 [2024-11-28 02:31:59.584385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.966 [2024-11-28 02:31:59.596316] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.966 [2024-11-28 02:31:59.598077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.966 [2024-11-28 02:31:59.598159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.966 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.226 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.226 "name": "Existed_Raid", 00:16:26.226 "uuid": "5b340f54-6568-40a8-8ed8-902182681a97", 00:16:26.226 "strip_size_kb": 0, 00:16:26.226 "state": "configuring", 00:16:26.226 "raid_level": "raid1", 00:16:26.226 "superblock": true, 00:16:26.226 "num_base_bdevs": 2, 00:16:26.226 "num_base_bdevs_discovered": 1, 00:16:26.226 "num_base_bdevs_operational": 2, 00:16:26.226 "base_bdevs_list": [ 00:16:26.226 { 00:16:26.226 "name": "BaseBdev1", 00:16:26.226 "uuid": "9470f7ff-bc82-47b6-9511-c5177e11fbe1", 00:16:26.226 "is_configured": true, 00:16:26.226 "data_offset": 256, 00:16:26.226 "data_size": 7936 00:16:26.226 }, 00:16:26.226 { 00:16:26.226 "name": "BaseBdev2", 00:16:26.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.226 "is_configured": false, 00:16:26.226 "data_offset": 0, 00:16:26.226 "data_size": 0 00:16:26.226 } 00:16:26.226 ] 00:16:26.226 }' 00:16:26.226 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.226 02:31:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.487 02:32:00 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.487 [2024-11-28 02:32:00.043505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.487 [2024-11-28 02:32:00.043841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:26.487 [2024-11-28 02:32:00.043891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:26.487 [2024-11-28 02:32:00.044176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:26.487 BaseBdev2 00:16:26.487 [2024-11-28 02:32:00.044378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:26.487 [2024-11-28 02:32:00.044395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:26.487 [2024-11-28 02:32:00.044533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.487 02:32:00 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.487 [ 00:16:26.487 { 00:16:26.487 "name": "BaseBdev2", 00:16:26.487 "aliases": [ 00:16:26.487 "456f2262-7d2a-4d6a-861c-f664f237e68f" 00:16:26.487 ], 00:16:26.487 "product_name": "Malloc disk", 00:16:26.487 "block_size": 4096, 00:16:26.487 "num_blocks": 8192, 00:16:26.487 "uuid": "456f2262-7d2a-4d6a-861c-f664f237e68f", 00:16:26.487 "assigned_rate_limits": { 00:16:26.487 "rw_ios_per_sec": 0, 00:16:26.487 "rw_mbytes_per_sec": 0, 00:16:26.487 "r_mbytes_per_sec": 0, 00:16:26.487 "w_mbytes_per_sec": 0 00:16:26.487 }, 00:16:26.487 "claimed": true, 00:16:26.487 "claim_type": "exclusive_write", 00:16:26.487 "zoned": false, 00:16:26.487 "supported_io_types": { 00:16:26.487 "read": true, 00:16:26.487 "write": true, 00:16:26.487 "unmap": true, 00:16:26.487 "flush": true, 00:16:26.487 "reset": true, 00:16:26.487 "nvme_admin": false, 00:16:26.487 "nvme_io": false, 00:16:26.487 "nvme_io_md": false, 00:16:26.487 "write_zeroes": true, 00:16:26.487 "zcopy": true, 00:16:26.487 "get_zone_info": false, 00:16:26.487 "zone_management": false, 00:16:26.487 "zone_append": false, 00:16:26.487 "compare": false, 00:16:26.487 "compare_and_write": false, 00:16:26.487 "abort": true, 00:16:26.487 "seek_hole": false, 00:16:26.487 "seek_data": false, 00:16:26.487 "copy": true, 00:16:26.487 "nvme_iov_md": false 
00:16:26.487 }, 00:16:26.487 "memory_domains": [ 00:16:26.487 { 00:16:26.487 "dma_device_id": "system", 00:16:26.487 "dma_device_type": 1 00:16:26.487 }, 00:16:26.487 { 00:16:26.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.487 "dma_device_type": 2 00:16:26.487 } 00:16:26.487 ], 00:16:26.487 "driver_specific": {} 00:16:26.487 } 00:16:26.487 ] 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.487 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.487 "name": "Existed_Raid", 00:16:26.487 "uuid": "5b340f54-6568-40a8-8ed8-902182681a97", 00:16:26.487 "strip_size_kb": 0, 00:16:26.487 "state": "online", 00:16:26.487 "raid_level": "raid1", 00:16:26.487 "superblock": true, 00:16:26.487 "num_base_bdevs": 2, 00:16:26.487 "num_base_bdevs_discovered": 2, 00:16:26.487 "num_base_bdevs_operational": 2, 00:16:26.487 "base_bdevs_list": [ 00:16:26.487 { 00:16:26.487 "name": "BaseBdev1", 00:16:26.487 "uuid": "9470f7ff-bc82-47b6-9511-c5177e11fbe1", 00:16:26.487 "is_configured": true, 00:16:26.487 "data_offset": 256, 00:16:26.487 "data_size": 7936 00:16:26.487 }, 00:16:26.487 { 00:16:26.487 "name": "BaseBdev2", 00:16:26.487 "uuid": "456f2262-7d2a-4d6a-861c-f664f237e68f", 00:16:26.487 "is_configured": true, 00:16:26.487 "data_offset": 256, 00:16:26.487 "data_size": 7936 00:16:26.487 } 00:16:26.488 ] 00:16:26.488 }' 00:16:26.488 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.488 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.058 02:32:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.058 [2024-11-28 02:32:00.558890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.058 "name": "Existed_Raid", 00:16:27.058 "aliases": [ 00:16:27.058 "5b340f54-6568-40a8-8ed8-902182681a97" 00:16:27.058 ], 00:16:27.058 "product_name": "Raid Volume", 00:16:27.058 "block_size": 4096, 00:16:27.058 "num_blocks": 7936, 00:16:27.058 "uuid": "5b340f54-6568-40a8-8ed8-902182681a97", 00:16:27.058 "assigned_rate_limits": { 00:16:27.058 "rw_ios_per_sec": 0, 00:16:27.058 "rw_mbytes_per_sec": 0, 00:16:27.058 "r_mbytes_per_sec": 0, 00:16:27.058 "w_mbytes_per_sec": 0 00:16:27.058 }, 00:16:27.058 "claimed": false, 00:16:27.058 "zoned": false, 00:16:27.058 "supported_io_types": { 00:16:27.058 "read": true, 
00:16:27.058 "write": true, 00:16:27.058 "unmap": false, 00:16:27.058 "flush": false, 00:16:27.058 "reset": true, 00:16:27.058 "nvme_admin": false, 00:16:27.058 "nvme_io": false, 00:16:27.058 "nvme_io_md": false, 00:16:27.058 "write_zeroes": true, 00:16:27.058 "zcopy": false, 00:16:27.058 "get_zone_info": false, 00:16:27.058 "zone_management": false, 00:16:27.058 "zone_append": false, 00:16:27.058 "compare": false, 00:16:27.058 "compare_and_write": false, 00:16:27.058 "abort": false, 00:16:27.058 "seek_hole": false, 00:16:27.058 "seek_data": false, 00:16:27.058 "copy": false, 00:16:27.058 "nvme_iov_md": false 00:16:27.058 }, 00:16:27.058 "memory_domains": [ 00:16:27.058 { 00:16:27.058 "dma_device_id": "system", 00:16:27.058 "dma_device_type": 1 00:16:27.058 }, 00:16:27.058 { 00:16:27.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.058 "dma_device_type": 2 00:16:27.058 }, 00:16:27.058 { 00:16:27.058 "dma_device_id": "system", 00:16:27.058 "dma_device_type": 1 00:16:27.058 }, 00:16:27.058 { 00:16:27.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.058 "dma_device_type": 2 00:16:27.058 } 00:16:27.058 ], 00:16:27.058 "driver_specific": { 00:16:27.058 "raid": { 00:16:27.058 "uuid": "5b340f54-6568-40a8-8ed8-902182681a97", 00:16:27.058 "strip_size_kb": 0, 00:16:27.058 "state": "online", 00:16:27.058 "raid_level": "raid1", 00:16:27.058 "superblock": true, 00:16:27.058 "num_base_bdevs": 2, 00:16:27.058 "num_base_bdevs_discovered": 2, 00:16:27.058 "num_base_bdevs_operational": 2, 00:16:27.058 "base_bdevs_list": [ 00:16:27.058 { 00:16:27.058 "name": "BaseBdev1", 00:16:27.058 "uuid": "9470f7ff-bc82-47b6-9511-c5177e11fbe1", 00:16:27.058 "is_configured": true, 00:16:27.058 "data_offset": 256, 00:16:27.058 "data_size": 7936 00:16:27.058 }, 00:16:27.058 { 00:16:27.058 "name": "BaseBdev2", 00:16:27.058 "uuid": "456f2262-7d2a-4d6a-861c-f664f237e68f", 00:16:27.058 "is_configured": true, 00:16:27.058 "data_offset": 256, 00:16:27.058 "data_size": 7936 00:16:27.058 } 
00:16:27.058 ] 00:16:27.058 } 00:16:27.058 } 00:16:27.058 }' 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:27.058 BaseBdev2' 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.058 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.318 [2024-11-28 02:32:00.782283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:27.318 02:32:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.318 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.319 "name": "Existed_Raid", 00:16:27.319 "uuid": "5b340f54-6568-40a8-8ed8-902182681a97", 00:16:27.319 "strip_size_kb": 0, 00:16:27.319 "state": "online", 00:16:27.319 "raid_level": "raid1", 00:16:27.319 "superblock": true, 00:16:27.319 
"num_base_bdevs": 2, 00:16:27.319 "num_base_bdevs_discovered": 1, 00:16:27.319 "num_base_bdevs_operational": 1, 00:16:27.319 "base_bdevs_list": [ 00:16:27.319 { 00:16:27.319 "name": null, 00:16:27.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.319 "is_configured": false, 00:16:27.319 "data_offset": 0, 00:16:27.319 "data_size": 7936 00:16:27.319 }, 00:16:27.319 { 00:16:27.319 "name": "BaseBdev2", 00:16:27.319 "uuid": "456f2262-7d2a-4d6a-861c-f664f237e68f", 00:16:27.319 "is_configured": true, 00:16:27.319 "data_offset": 256, 00:16:27.319 "data_size": 7936 00:16:27.319 } 00:16:27.319 ] 00:16:27.319 }' 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.319 02:32:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.888 [2024-11-28 02:32:01.364645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.888 [2024-11-28 02:32:01.364802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.888 [2024-11-28 02:32:01.453147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.888 [2024-11-28 02:32:01.453283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.888 [2024-11-28 02:32:01.453325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:27.888 02:32:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85672 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85672 ']' 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85672 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85672 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.888 killing process with pid 85672 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85672' 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85672 00:16:27.888 [2024-11-28 02:32:01.546950] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.888 02:32:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85672 00:16:27.888 [2024-11-28 02:32:01.563212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.270 02:32:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:29.270 00:16:29.270 real 0m4.900s 00:16:29.270 user 0m7.119s 00:16:29.270 sys 0m0.805s 00:16:29.270 02:32:02 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.270 ************************************ 00:16:29.270 END TEST raid_state_function_test_sb_4k 00:16:29.270 ************************************ 00:16:29.270 02:32:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.270 02:32:02 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:29.270 02:32:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:29.270 02:32:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.270 02:32:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.270 ************************************ 00:16:29.270 START TEST raid_superblock_test_4k 00:16:29.270 ************************************ 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:29.270 
02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85914 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85914 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85914 ']' 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.270 02:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.270 [2024-11-28 02:32:02.770690] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:29.270 [2024-11-28 02:32:02.770801] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85914 ] 00:16:29.270 [2024-11-28 02:32:02.943991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.530 [2024-11-28 02:32:03.049073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.790 [2024-11-28 02:32:03.236707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.790 [2024-11-28 02:32:03.236759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.050 malloc1 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.050 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.050 [2024-11-28 02:32:03.633998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:30.050 [2024-11-28 02:32:03.634098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.050 [2024-11-28 02:32:03.634136] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.051 [2024-11-28 02:32:03.634163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.051 [2024-11-28 02:32:03.636184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.051 [2024-11-28 02:32:03.636270] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:30.051 pt1 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.051 malloc2 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.051 [2024-11-28 02:32:03.683082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.051 [2024-11-28 02:32:03.683188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.051 [2024-11-28 02:32:03.683229] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:30.051 [2024-11-28 02:32:03.683258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.051 [2024-11-28 02:32:03.685244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.051 [2024-11-28 
02:32:03.685321] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.051 pt2 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.051 [2024-11-28 02:32:03.695110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:30.051 [2024-11-28 02:32:03.696848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.051 [2024-11-28 02:32:03.697073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:30.051 [2024-11-28 02:32:03.697127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:30.051 [2024-11-28 02:32:03.697366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:30.051 [2024-11-28 02:32:03.697552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:30.051 [2024-11-28 02:32:03.697597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:30.051 [2024-11-28 02:32:03.697784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.051 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.311 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.311 "name": "raid_bdev1", 00:16:30.311 "uuid": "33953730-3347-4a89-b3c0-bf73738b8d47", 00:16:30.312 "strip_size_kb": 0, 00:16:30.312 "state": "online", 00:16:30.312 "raid_level": "raid1", 00:16:30.312 "superblock": true, 00:16:30.312 "num_base_bdevs": 2, 00:16:30.312 
"num_base_bdevs_discovered": 2, 00:16:30.312 "num_base_bdevs_operational": 2, 00:16:30.312 "base_bdevs_list": [ 00:16:30.312 { 00:16:30.312 "name": "pt1", 00:16:30.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.312 "is_configured": true, 00:16:30.312 "data_offset": 256, 00:16:30.312 "data_size": 7936 00:16:30.312 }, 00:16:30.312 { 00:16:30.312 "name": "pt2", 00:16:30.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.312 "is_configured": true, 00:16:30.312 "data_offset": 256, 00:16:30.312 "data_size": 7936 00:16:30.312 } 00:16:30.312 ] 00:16:30.312 }' 00:16:30.312 02:32:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.312 02:32:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.572 [2024-11-28 02:32:04.126582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.572 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:30.572 "name": "raid_bdev1", 00:16:30.572 "aliases": [ 00:16:30.572 "33953730-3347-4a89-b3c0-bf73738b8d47" 00:16:30.572 ], 00:16:30.572 "product_name": "Raid Volume", 00:16:30.572 "block_size": 4096, 00:16:30.572 "num_blocks": 7936, 00:16:30.572 "uuid": "33953730-3347-4a89-b3c0-bf73738b8d47", 00:16:30.572 "assigned_rate_limits": { 00:16:30.572 "rw_ios_per_sec": 0, 00:16:30.572 "rw_mbytes_per_sec": 0, 00:16:30.572 "r_mbytes_per_sec": 0, 00:16:30.572 "w_mbytes_per_sec": 0 00:16:30.572 }, 00:16:30.572 "claimed": false, 00:16:30.572 "zoned": false, 00:16:30.572 "supported_io_types": { 00:16:30.572 "read": true, 00:16:30.572 "write": true, 00:16:30.572 "unmap": false, 00:16:30.572 "flush": false, 00:16:30.572 "reset": true, 00:16:30.572 "nvme_admin": false, 00:16:30.572 "nvme_io": false, 00:16:30.572 "nvme_io_md": false, 00:16:30.572 "write_zeroes": true, 00:16:30.572 "zcopy": false, 00:16:30.572 "get_zone_info": false, 00:16:30.572 "zone_management": false, 00:16:30.572 "zone_append": false, 00:16:30.572 "compare": false, 00:16:30.572 "compare_and_write": false, 00:16:30.572 "abort": false, 00:16:30.572 "seek_hole": false, 00:16:30.572 "seek_data": false, 00:16:30.572 "copy": false, 00:16:30.572 "nvme_iov_md": false 00:16:30.572 }, 00:16:30.572 "memory_domains": [ 00:16:30.572 { 00:16:30.572 "dma_device_id": "system", 00:16:30.572 "dma_device_type": 1 00:16:30.572 }, 00:16:30.573 { 00:16:30.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.573 "dma_device_type": 2 00:16:30.573 }, 00:16:30.573 { 00:16:30.573 "dma_device_id": "system", 00:16:30.573 "dma_device_type": 1 00:16:30.573 }, 00:16:30.573 { 00:16:30.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.573 "dma_device_type": 2 00:16:30.573 } 00:16:30.573 ], 
00:16:30.573 "driver_specific": { 00:16:30.573 "raid": { 00:16:30.573 "uuid": "33953730-3347-4a89-b3c0-bf73738b8d47", 00:16:30.573 "strip_size_kb": 0, 00:16:30.573 "state": "online", 00:16:30.573 "raid_level": "raid1", 00:16:30.573 "superblock": true, 00:16:30.573 "num_base_bdevs": 2, 00:16:30.573 "num_base_bdevs_discovered": 2, 00:16:30.573 "num_base_bdevs_operational": 2, 00:16:30.573 "base_bdevs_list": [ 00:16:30.573 { 00:16:30.573 "name": "pt1", 00:16:30.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.573 "is_configured": true, 00:16:30.573 "data_offset": 256, 00:16:30.573 "data_size": 7936 00:16:30.573 }, 00:16:30.573 { 00:16:30.573 "name": "pt2", 00:16:30.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.573 "is_configured": true, 00:16:30.573 "data_offset": 256, 00:16:30.573 "data_size": 7936 00:16:30.573 } 00:16:30.573 ] 00:16:30.573 } 00:16:30.573 } 00:16:30.573 }' 00:16:30.573 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.573 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:30.573 pt2' 00:16:30.573 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.834 [2024-11-28 02:32:04.370168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=33953730-3347-4a89-b3c0-bf73738b8d47 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 33953730-3347-4a89-b3c0-bf73738b8d47 ']' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.834 [2024-11-28 02:32:04.413819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.834 [2024-11-28 02:32:04.413877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.834 [2024-11-28 02:32:04.413985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.834 [2024-11-28 02:32:04.414061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.834 [2024-11-28 02:32:04.414106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:30.834 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.095 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:31.095 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.096 [2024-11-28 02:32:04.533648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:31.096 [2024-11-28 02:32:04.535409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:31.096 [2024-11-28 02:32:04.535513] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:31.096 [2024-11-28 02:32:04.535598] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:31.096 [2024-11-28 02:32:04.535636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.096 [2024-11-28 02:32:04.535673] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:31.096 request: 00:16:31.096 { 00:16:31.096 "name": "raid_bdev1", 00:16:31.096 "raid_level": "raid1", 00:16:31.096 "base_bdevs": [ 00:16:31.096 "malloc1", 00:16:31.096 "malloc2" 00:16:31.096 ], 00:16:31.096 "superblock": false, 00:16:31.096 "method": "bdev_raid_create", 00:16:31.096 "req_id": 1 00:16:31.096 } 00:16:31.096 Got JSON-RPC error response 00:16:31.096 response: 00:16:31.096 { 00:16:31.096 "code": -17, 00:16:31.096 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:31.096 } 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.096 [2024-11-28 02:32:04.601513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:31.096 [2024-11-28 02:32:04.601596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.096 [2024-11-28 02:32:04.601629] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:31.096 [2024-11-28 02:32:04.601655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.096 [2024-11-28 02:32:04.603794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.096 [2024-11-28 02:32:04.603882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:31.096 [2024-11-28 02:32:04.604020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:31.096 [2024-11-28 02:32:04.604123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.096 pt1 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.096 "name": "raid_bdev1", 00:16:31.096 "uuid": "33953730-3347-4a89-b3c0-bf73738b8d47", 00:16:31.096 "strip_size_kb": 0, 00:16:31.096 "state": "configuring", 00:16:31.096 "raid_level": "raid1", 00:16:31.096 "superblock": true, 00:16:31.096 "num_base_bdevs": 2, 00:16:31.096 "num_base_bdevs_discovered": 1, 00:16:31.096 "num_base_bdevs_operational": 2, 00:16:31.096 "base_bdevs_list": [ 00:16:31.096 { 00:16:31.096 "name": "pt1", 00:16:31.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.096 "is_configured": true, 00:16:31.096 "data_offset": 256, 00:16:31.096 "data_size": 7936 00:16:31.096 }, 00:16:31.096 { 00:16:31.096 "name": null, 00:16:31.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.096 "is_configured": false, 00:16:31.096 "data_offset": 256, 00:16:31.096 "data_size": 7936 00:16:31.096 } 
00:16:31.096 ] 00:16:31.096 }' 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.096 02:32:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.356 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:31.356 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:31.356 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:31.356 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:31.356 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.356 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.357 [2024-11-28 02:32:05.016837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:31.357 [2024-11-28 02:32:05.016941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.357 [2024-11-28 02:32:05.016979] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:31.357 [2024-11-28 02:32:05.017008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.357 [2024-11-28 02:32:05.017429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.357 [2024-11-28 02:32:05.017487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:31.357 [2024-11-28 02:32:05.017576] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:31.357 [2024-11-28 02:32:05.017626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.357 [2024-11-28 02:32:05.017756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:16:31.357 [2024-11-28 02:32:05.017795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:31.357 [2024-11-28 02:32:05.018049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:31.357 [2024-11-28 02:32:05.018245] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:31.357 [2024-11-28 02:32:05.018282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:31.357 [2024-11-28 02:32:05.018453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.357 pt2 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.357 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.624 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.624 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.624 "name": "raid_bdev1", 00:16:31.624 "uuid": "33953730-3347-4a89-b3c0-bf73738b8d47", 00:16:31.624 "strip_size_kb": 0, 00:16:31.624 "state": "online", 00:16:31.624 "raid_level": "raid1", 00:16:31.624 "superblock": true, 00:16:31.624 "num_base_bdevs": 2, 00:16:31.624 "num_base_bdevs_discovered": 2, 00:16:31.624 "num_base_bdevs_operational": 2, 00:16:31.624 "base_bdevs_list": [ 00:16:31.625 { 00:16:31.625 "name": "pt1", 00:16:31.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.625 "is_configured": true, 00:16:31.625 "data_offset": 256, 00:16:31.625 "data_size": 7936 00:16:31.625 }, 00:16:31.625 { 00:16:31.625 "name": "pt2", 00:16:31.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.625 "is_configured": true, 00:16:31.625 "data_offset": 256, 00:16:31.625 "data_size": 7936 00:16:31.625 } 00:16:31.625 ] 00:16:31.625 }' 00:16:31.625 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.625 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.889 [2024-11-28 02:32:05.460290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.889 "name": "raid_bdev1", 00:16:31.889 "aliases": [ 00:16:31.889 "33953730-3347-4a89-b3c0-bf73738b8d47" 00:16:31.889 ], 00:16:31.889 "product_name": "Raid Volume", 00:16:31.889 "block_size": 4096, 00:16:31.889 "num_blocks": 7936, 00:16:31.889 "uuid": "33953730-3347-4a89-b3c0-bf73738b8d47", 00:16:31.889 "assigned_rate_limits": { 00:16:31.889 "rw_ios_per_sec": 0, 00:16:31.889 "rw_mbytes_per_sec": 0, 00:16:31.889 "r_mbytes_per_sec": 0, 00:16:31.889 "w_mbytes_per_sec": 0 00:16:31.889 }, 00:16:31.889 "claimed": false, 00:16:31.889 "zoned": false, 00:16:31.889 "supported_io_types": { 00:16:31.889 "read": true, 00:16:31.889 "write": true, 00:16:31.889 "unmap": false, 
00:16:31.889 "flush": false, 00:16:31.889 "reset": true, 00:16:31.889 "nvme_admin": false, 00:16:31.889 "nvme_io": false, 00:16:31.889 "nvme_io_md": false, 00:16:31.889 "write_zeroes": true, 00:16:31.889 "zcopy": false, 00:16:31.889 "get_zone_info": false, 00:16:31.889 "zone_management": false, 00:16:31.889 "zone_append": false, 00:16:31.889 "compare": false, 00:16:31.889 "compare_and_write": false, 00:16:31.889 "abort": false, 00:16:31.889 "seek_hole": false, 00:16:31.889 "seek_data": false, 00:16:31.889 "copy": false, 00:16:31.889 "nvme_iov_md": false 00:16:31.889 }, 00:16:31.889 "memory_domains": [ 00:16:31.889 { 00:16:31.889 "dma_device_id": "system", 00:16:31.889 "dma_device_type": 1 00:16:31.889 }, 00:16:31.889 { 00:16:31.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.889 "dma_device_type": 2 00:16:31.889 }, 00:16:31.889 { 00:16:31.889 "dma_device_id": "system", 00:16:31.889 "dma_device_type": 1 00:16:31.889 }, 00:16:31.889 { 00:16:31.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.889 "dma_device_type": 2 00:16:31.889 } 00:16:31.889 ], 00:16:31.889 "driver_specific": { 00:16:31.889 "raid": { 00:16:31.889 "uuid": "33953730-3347-4a89-b3c0-bf73738b8d47", 00:16:31.889 "strip_size_kb": 0, 00:16:31.889 "state": "online", 00:16:31.889 "raid_level": "raid1", 00:16:31.889 "superblock": true, 00:16:31.889 "num_base_bdevs": 2, 00:16:31.889 "num_base_bdevs_discovered": 2, 00:16:31.889 "num_base_bdevs_operational": 2, 00:16:31.889 "base_bdevs_list": [ 00:16:31.889 { 00:16:31.889 "name": "pt1", 00:16:31.889 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.889 "is_configured": true, 00:16:31.889 "data_offset": 256, 00:16:31.889 "data_size": 7936 00:16:31.889 }, 00:16:31.889 { 00:16:31.889 "name": "pt2", 00:16:31.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.889 "is_configured": true, 00:16:31.889 "data_offset": 256, 00:16:31.889 "data_size": 7936 00:16:31.889 } 00:16:31.889 ] 00:16:31.889 } 00:16:31.889 } 00:16:31.889 }' 00:16:31.889 
02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:31.889 pt2' 00:16:31.889 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.148 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:32.148 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.148 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.148 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:32.148 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.148 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.148 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.149 [2024-11-28 02:32:05.659957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 33953730-3347-4a89-b3c0-bf73738b8d47 '!=' 33953730-3347-4a89-b3c0-bf73738b8d47 ']' 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.149 [2024-11-28 02:32:05.703674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.149 "name": "raid_bdev1", 00:16:32.149 "uuid": 
"33953730-3347-4a89-b3c0-bf73738b8d47", 00:16:32.149 "strip_size_kb": 0, 00:16:32.149 "state": "online", 00:16:32.149 "raid_level": "raid1", 00:16:32.149 "superblock": true, 00:16:32.149 "num_base_bdevs": 2, 00:16:32.149 "num_base_bdevs_discovered": 1, 00:16:32.149 "num_base_bdevs_operational": 1, 00:16:32.149 "base_bdevs_list": [ 00:16:32.149 { 00:16:32.149 "name": null, 00:16:32.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.149 "is_configured": false, 00:16:32.149 "data_offset": 0, 00:16:32.149 "data_size": 7936 00:16:32.149 }, 00:16:32.149 { 00:16:32.149 "name": "pt2", 00:16:32.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.149 "is_configured": true, 00:16:32.149 "data_offset": 256, 00:16:32.149 "data_size": 7936 00:16:32.149 } 00:16:32.149 ] 00:16:32.149 }' 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.149 02:32:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.718 [2024-11-28 02:32:06.146892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.718 [2024-11-28 02:32:06.146985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.718 [2024-11-28 02:32:06.147054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.718 [2024-11-28 02:32:06.147098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.718 [2024-11-28 02:32:06.147110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.718 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.719 [2024-11-28 02:32:06.218764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.719 [2024-11-28 02:32:06.218853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.719 [2024-11-28 02:32:06.218889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:32.719 [2024-11-28 02:32:06.218931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.719 [2024-11-28 02:32:06.221014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.719 [2024-11-28 02:32:06.221084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.719 [2024-11-28 02:32:06.221176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:32.719 [2024-11-28 02:32:06.221246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.719 [2024-11-28 02:32:06.221371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:32.719 [2024-11-28 02:32:06.221410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:32.719 [2024-11-28 02:32:06.221650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:32.719 [2024-11-28 02:32:06.221826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:32.719 [2024-11-28 02:32:06.221866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:16:32.719 [2024-11-28 02:32:06.222054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.719 pt2 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.719 02:32:06 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.719 "name": "raid_bdev1", 00:16:32.719 "uuid": "33953730-3347-4a89-b3c0-bf73738b8d47", 00:16:32.719 "strip_size_kb": 0, 00:16:32.719 "state": "online", 00:16:32.719 "raid_level": "raid1", 00:16:32.719 "superblock": true, 00:16:32.719 "num_base_bdevs": 2, 00:16:32.719 "num_base_bdevs_discovered": 1, 00:16:32.719 "num_base_bdevs_operational": 1, 00:16:32.719 "base_bdevs_list": [ 00:16:32.719 { 00:16:32.719 "name": null, 00:16:32.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.719 "is_configured": false, 00:16:32.719 "data_offset": 256, 00:16:32.719 "data_size": 7936 00:16:32.719 }, 00:16:32.719 { 00:16:32.719 "name": "pt2", 00:16:32.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.719 "is_configured": true, 00:16:32.719 "data_offset": 256, 00:16:32.719 "data_size": 7936 00:16:32.719 } 00:16:32.719 ] 00:16:32.719 }' 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.719 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.978 [2024-11-28 02:32:06.594068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.978 [2024-11-28 02:32:06.594130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.978 [2024-11-28 02:32:06.594196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.978 [2024-11-28 02:32:06.594249] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:32.978 [2024-11-28 02:32:06.594280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.978 [2024-11-28 02:32:06.638019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.978 [2024-11-28 02:32:06.638098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.978 [2024-11-28 02:32:06.638130] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:32.978 [2024-11-28 02:32:06.638156] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.978 [2024-11-28 02:32:06.640182] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.978 [2024-11-28 02:32:06.640249] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.978 [2024-11-28 02:32:06.640340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:32.978 [2024-11-28 02:32:06.640404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.978 [2024-11-28 02:32:06.640616] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:32.978 [2024-11-28 02:32:06.640669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.978 [2024-11-28 02:32:06.640705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:32.978 [2024-11-28 02:32:06.640789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.978 [2024-11-28 02:32:06.640882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:32.978 [2024-11-28 02:32:06.640916] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:32.978 [2024-11-28 02:32:06.641163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:32.978 [2024-11-28 02:32:06.641344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:32.978 [2024-11-28 02:32:06.641389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:32.978 [2024-11-28 02:32:06.641559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.978 pt1 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.978 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:33.237 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.237 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.237 "name": "raid_bdev1", 00:16:33.237 "uuid": "33953730-3347-4a89-b3c0-bf73738b8d47", 00:16:33.237 "strip_size_kb": 0, 00:16:33.237 "state": "online", 00:16:33.237 
"raid_level": "raid1", 00:16:33.237 "superblock": true, 00:16:33.238 "num_base_bdevs": 2, 00:16:33.238 "num_base_bdevs_discovered": 1, 00:16:33.238 "num_base_bdevs_operational": 1, 00:16:33.238 "base_bdevs_list": [ 00:16:33.238 { 00:16:33.238 "name": null, 00:16:33.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.238 "is_configured": false, 00:16:33.238 "data_offset": 256, 00:16:33.238 "data_size": 7936 00:16:33.238 }, 00:16:33.238 { 00:16:33.238 "name": "pt2", 00:16:33.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.238 "is_configured": true, 00:16:33.238 "data_offset": 256, 00:16:33.238 "data_size": 7936 00:16:33.238 } 00:16:33.238 ] 00:16:33.238 }' 00:16:33.238 02:32:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.238 02:32:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:16:33.497 [2024-11-28 02:32:07.133395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 33953730-3347-4a89-b3c0-bf73738b8d47 '!=' 33953730-3347-4a89-b3c0-bf73738b8d47 ']' 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85914 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85914 ']' 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85914 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.497 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85914 00:16:33.757 killing process with pid 85914 00:16:33.757 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.757 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.757 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85914' 00:16:33.757 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85914 00:16:33.757 [2024-11-28 02:32:07.182311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.757 [2024-11-28 02:32:07.182384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.757 [2024-11-28 02:32:07.182428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.757 [2024-11-28 
02:32:07.182441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:33.757 02:32:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85914 00:16:33.757 [2024-11-28 02:32:07.374065] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.137 ************************************ 00:16:35.137 END TEST raid_superblock_test_4k 00:16:35.137 ************************************ 00:16:35.137 02:32:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:35.137 00:16:35.137 real 0m5.744s 00:16:35.137 user 0m8.692s 00:16:35.137 sys 0m1.000s 00:16:35.137 02:32:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.137 02:32:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.137 02:32:08 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:35.137 02:32:08 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:35.137 02:32:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:35.137 02:32:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.137 02:32:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.137 ************************************ 00:16:35.137 START TEST raid_rebuild_test_sb_4k 00:16:35.137 ************************************ 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:35.137 02:32:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86237 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86237 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86237 ']' 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.137 02:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.137 [2024-11-28 02:32:08.599523] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:35.137 [2024-11-28 02:32:08.599720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:35.137 Zero copy mechanism will not be used. 
00:16:35.137 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86237 ] 00:16:35.137 [2024-11-28 02:32:08.770586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.397 [2024-11-28 02:32:08.868692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.397 [2024-11-28 02:32:09.061017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.397 [2024-11-28 02:32:09.061099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.968 BaseBdev1_malloc 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.968 [2024-11-28 02:32:09.475841] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:35.968 [2024-11-28 02:32:09.475966] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.968 [2024-11-28 02:32:09.476005] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:35.968 [2024-11-28 02:32:09.476034] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.968 [2024-11-28 02:32:09.478044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.968 [2024-11-28 02:32:09.478116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:35.968 BaseBdev1 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.968 BaseBdev2_malloc 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.968 [2024-11-28 02:32:09.528020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:35.968 [2024-11-28 02:32:09.528125] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.968 [2024-11-28 02:32:09.528162] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:16:35.968 [2024-11-28 02:32:09.528190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.968 [2024-11-28 02:32:09.530171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.968 [2024-11-28 02:32:09.530251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:35.968 BaseBdev2 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.968 spare_malloc 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.968 spare_delay 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.968 [2024-11-28 02:32:09.626116] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.968 
[2024-11-28 02:32:09.626205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.968 [2024-11-28 02:32:09.626240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:35.968 [2024-11-28 02:32:09.626268] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.968 [2024-11-28 02:32:09.628289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.968 [2024-11-28 02:32:09.628359] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.968 spare 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.968 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.968 [2024-11-28 02:32:09.638152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.968 [2024-11-28 02:32:09.639883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.968 [2024-11-28 02:32:09.640139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:35.968 [2024-11-28 02:32:09.640188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:35.968 [2024-11-28 02:32:09.640429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:35.968 [2024-11-28 02:32:09.640625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:35.968 [2024-11-28 02:32:09.640637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:16:35.968 [2024-11-28 02:32:09.640765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.228 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.228 "name": "raid_bdev1", 00:16:36.228 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:36.228 "strip_size_kb": 0, 00:16:36.228 "state": "online", 00:16:36.229 "raid_level": "raid1", 00:16:36.229 "superblock": true, 00:16:36.229 "num_base_bdevs": 2, 00:16:36.229 "num_base_bdevs_discovered": 2, 00:16:36.229 "num_base_bdevs_operational": 2, 00:16:36.229 "base_bdevs_list": [ 00:16:36.229 { 00:16:36.229 "name": "BaseBdev1", 00:16:36.229 "uuid": "f3319015-1234-5456-9234-79cee2d213de", 00:16:36.229 "is_configured": true, 00:16:36.229 "data_offset": 256, 00:16:36.229 "data_size": 7936 00:16:36.229 }, 00:16:36.229 { 00:16:36.229 "name": "BaseBdev2", 00:16:36.229 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:36.229 "is_configured": true, 00:16:36.229 "data_offset": 256, 00:16:36.229 "data_size": 7936 00:16:36.229 } 00:16:36.229 ] 00:16:36.229 }' 00:16:36.229 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.229 02:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.488 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.488 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:36.488 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.488 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.488 [2024-11-28 02:32:10.089600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.488 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.488 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:36.488 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:36.488 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:36.488 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.488 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.489 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:36.749 [2024-11-28 02:32:10.364991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:36.749 /dev/nbd0 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.749 1+0 records in 00:16:36.749 1+0 records out 00:16:36.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372524 s, 11.0 MB/s 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:36.749 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.009 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:37.009 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:37.009 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.009 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.009 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:37.009 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:37.009 02:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:37.579 7936+0 records in 00:16:37.579 7936+0 records out 00:16:37.579 32505856 bytes (33 MB, 31 MiB) copied, 0.576341 s, 56.4 MB/s 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:37.579 [2024-11-28 02:32:11.213996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.579 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.580 [2024-11-28 02:32:11.227207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.580 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.840 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.840 "name": "raid_bdev1", 00:16:37.840 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:37.840 "strip_size_kb": 0, 00:16:37.840 "state": "online", 00:16:37.840 "raid_level": "raid1", 00:16:37.840 "superblock": true, 00:16:37.840 "num_base_bdevs": 2, 00:16:37.840 "num_base_bdevs_discovered": 1, 00:16:37.840 "num_base_bdevs_operational": 1, 00:16:37.840 "base_bdevs_list": [ 00:16:37.840 { 00:16:37.840 "name": null, 00:16:37.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.840 "is_configured": false, 00:16:37.840 "data_offset": 0, 00:16:37.840 "data_size": 7936 00:16:37.840 }, 00:16:37.840 { 00:16:37.840 "name": "BaseBdev2", 00:16:37.840 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:37.840 "is_configured": true, 00:16:37.840 "data_offset": 256, 00:16:37.840 "data_size": 7936 00:16:37.840 } 00:16:37.840 ] 00:16:37.840 }' 00:16:37.840 02:32:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.840 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.099 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:38.099 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.099 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.099 [2024-11-28 02:32:11.686429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.099 [2024-11-28 02:32:11.703416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:16:38.099 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.099 02:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:38.099 [2024-11-28 02:32:11.705205] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.064 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.064 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.064 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.064 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.064 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.064 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.064 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.064 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.064 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.064 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.324 "name": "raid_bdev1", 00:16:39.324 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:39.324 "strip_size_kb": 0, 00:16:39.324 "state": "online", 00:16:39.324 "raid_level": "raid1", 00:16:39.324 "superblock": true, 00:16:39.324 "num_base_bdevs": 2, 00:16:39.324 "num_base_bdevs_discovered": 2, 00:16:39.324 "num_base_bdevs_operational": 2, 00:16:39.324 "process": { 00:16:39.324 "type": "rebuild", 00:16:39.324 "target": "spare", 00:16:39.324 "progress": { 00:16:39.324 "blocks": 2560, 00:16:39.324 "percent": 32 00:16:39.324 } 00:16:39.324 }, 00:16:39.324 "base_bdevs_list": [ 00:16:39.324 { 00:16:39.324 "name": "spare", 00:16:39.324 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:39.324 "is_configured": true, 00:16:39.324 "data_offset": 256, 00:16:39.324 "data_size": 7936 00:16:39.324 }, 00:16:39.324 { 00:16:39.324 "name": "BaseBdev2", 00:16:39.324 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:39.324 "is_configured": true, 00:16:39.324 "data_offset": 256, 00:16:39.324 "data_size": 7936 00:16:39.324 } 00:16:39.324 ] 00:16:39.324 }' 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.324 [2024-11-28 02:32:12.864713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.324 [2024-11-28 02:32:12.909631] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:39.324 [2024-11-28 02:32:12.909687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.324 [2024-11-28 02:32:12.909700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.324 [2024-11-28 02:32:12.909708] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.324 "name": "raid_bdev1", 00:16:39.324 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:39.324 "strip_size_kb": 0, 00:16:39.324 "state": "online", 00:16:39.324 "raid_level": "raid1", 00:16:39.324 "superblock": true, 00:16:39.324 "num_base_bdevs": 2, 00:16:39.324 "num_base_bdevs_discovered": 1, 00:16:39.324 "num_base_bdevs_operational": 1, 00:16:39.324 "base_bdevs_list": [ 00:16:39.324 { 00:16:39.324 "name": null, 00:16:39.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.324 "is_configured": false, 00:16:39.324 "data_offset": 0, 00:16:39.324 "data_size": 7936 00:16:39.324 }, 00:16:39.324 { 00:16:39.324 "name": "BaseBdev2", 00:16:39.324 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:39.324 "is_configured": true, 00:16:39.324 "data_offset": 256, 00:16:39.324 "data_size": 7936 00:16:39.324 } 00:16:39.324 ] 00:16:39.324 }' 00:16:39.324 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.325 02:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.894 
02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.894 "name": "raid_bdev1", 00:16:39.894 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:39.894 "strip_size_kb": 0, 00:16:39.894 "state": "online", 00:16:39.894 "raid_level": "raid1", 00:16:39.894 "superblock": true, 00:16:39.894 "num_base_bdevs": 2, 00:16:39.894 "num_base_bdevs_discovered": 1, 00:16:39.894 "num_base_bdevs_operational": 1, 00:16:39.894 "base_bdevs_list": [ 00:16:39.894 { 00:16:39.894 "name": null, 00:16:39.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.894 "is_configured": false, 00:16:39.894 "data_offset": 0, 00:16:39.894 "data_size": 7936 00:16:39.894 }, 00:16:39.894 { 00:16:39.894 "name": "BaseBdev2", 00:16:39.894 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:39.894 "is_configured": true, 00:16:39.894 "data_offset": 256, 00:16:39.894 "data_size": 7936 00:16:39.894 } 00:16:39.894 ] 00:16:39.894 }' 00:16:39.894 02:32:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.894 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.894 [2024-11-28 02:32:13.565851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.154 [2024-11-28 02:32:13.581814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:16:40.154 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.154 02:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:40.155 [2024-11-28 02:32:13.583680] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.095 "name": "raid_bdev1", 00:16:41.095 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:41.095 "strip_size_kb": 0, 00:16:41.095 "state": "online", 00:16:41.095 "raid_level": "raid1", 00:16:41.095 "superblock": true, 00:16:41.095 "num_base_bdevs": 2, 00:16:41.095 "num_base_bdevs_discovered": 2, 00:16:41.095 "num_base_bdevs_operational": 2, 00:16:41.095 "process": { 00:16:41.095 "type": "rebuild", 00:16:41.095 "target": "spare", 00:16:41.095 "progress": { 00:16:41.095 "blocks": 2560, 00:16:41.095 "percent": 32 00:16:41.095 } 00:16:41.095 }, 00:16:41.095 "base_bdevs_list": [ 00:16:41.095 { 00:16:41.095 "name": "spare", 00:16:41.095 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:41.095 "is_configured": true, 00:16:41.095 "data_offset": 256, 00:16:41.095 "data_size": 7936 00:16:41.095 }, 00:16:41.095 { 00:16:41.095 "name": "BaseBdev2", 00:16:41.095 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:41.095 "is_configured": true, 00:16:41.095 "data_offset": 256, 00:16:41.095 "data_size": 7936 00:16:41.095 } 00:16:41.095 ] 00:16:41.095 }' 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:41.095 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=664 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.095 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.355 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.355 "name": "raid_bdev1", 00:16:41.355 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:41.355 "strip_size_kb": 0, 00:16:41.355 "state": "online", 00:16:41.355 "raid_level": "raid1", 00:16:41.355 "superblock": true, 00:16:41.355 "num_base_bdevs": 2, 00:16:41.355 "num_base_bdevs_discovered": 2, 00:16:41.355 "num_base_bdevs_operational": 2, 00:16:41.355 "process": { 00:16:41.355 "type": "rebuild", 00:16:41.355 "target": "spare", 00:16:41.355 "progress": { 00:16:41.355 "blocks": 2816, 00:16:41.355 "percent": 35 00:16:41.355 } 00:16:41.355 }, 00:16:41.355 "base_bdevs_list": [ 00:16:41.355 { 00:16:41.355 "name": "spare", 00:16:41.355 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:41.355 "is_configured": true, 00:16:41.355 "data_offset": 256, 00:16:41.355 "data_size": 7936 00:16:41.355 }, 00:16:41.355 { 00:16:41.355 "name": "BaseBdev2", 00:16:41.355 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:41.355 "is_configured": true, 00:16:41.355 "data_offset": 256, 00:16:41.355 "data_size": 7936 00:16:41.355 } 00:16:41.355 ] 00:16:41.355 }' 00:16:41.355 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.355 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.355 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.355 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.355 02:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.291 "name": "raid_bdev1", 00:16:42.291 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:42.291 "strip_size_kb": 0, 00:16:42.291 "state": "online", 00:16:42.291 "raid_level": "raid1", 00:16:42.291 "superblock": true, 00:16:42.291 "num_base_bdevs": 2, 00:16:42.291 "num_base_bdevs_discovered": 2, 00:16:42.291 "num_base_bdevs_operational": 2, 00:16:42.291 "process": { 00:16:42.291 "type": "rebuild", 00:16:42.291 "target": "spare", 00:16:42.291 "progress": { 00:16:42.291 "blocks": 5632, 00:16:42.291 "percent": 70 00:16:42.291 } 00:16:42.291 }, 00:16:42.291 "base_bdevs_list": [ 00:16:42.291 { 00:16:42.291 "name": "spare", 00:16:42.291 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:42.291 "is_configured": true, 00:16:42.291 
"data_offset": 256, 00:16:42.291 "data_size": 7936 00:16:42.291 }, 00:16:42.291 { 00:16:42.291 "name": "BaseBdev2", 00:16:42.291 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:42.291 "is_configured": true, 00:16:42.291 "data_offset": 256, 00:16:42.291 "data_size": 7936 00:16:42.291 } 00:16:42.291 ] 00:16:42.291 }' 00:16:42.291 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.550 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.550 02:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.550 02:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.550 02:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.118 [2024-11-28 02:32:16.694916] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:43.118 [2024-11-28 02:32:16.694983] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:43.118 [2024-11-28 02:32:16.695071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.378 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.378 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.378 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.378 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.378 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.378 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.378 02:32:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.378 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.378 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.378 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.378 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.639 "name": "raid_bdev1", 00:16:43.639 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:43.639 "strip_size_kb": 0, 00:16:43.639 "state": "online", 00:16:43.639 "raid_level": "raid1", 00:16:43.639 "superblock": true, 00:16:43.639 "num_base_bdevs": 2, 00:16:43.639 "num_base_bdevs_discovered": 2, 00:16:43.639 "num_base_bdevs_operational": 2, 00:16:43.639 "base_bdevs_list": [ 00:16:43.639 { 00:16:43.639 "name": "spare", 00:16:43.639 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:43.639 "is_configured": true, 00:16:43.639 "data_offset": 256, 00:16:43.639 "data_size": 7936 00:16:43.639 }, 00:16:43.639 { 00:16:43.639 "name": "BaseBdev2", 00:16:43.639 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:43.639 "is_configured": true, 00:16:43.639 "data_offset": 256, 00:16:43.639 "data_size": 7936 00:16:43.639 } 00:16:43.639 ] 00:16:43.639 }' 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:43.639 02:32:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.639 "name": "raid_bdev1", 00:16:43.639 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:43.639 "strip_size_kb": 0, 00:16:43.639 "state": "online", 00:16:43.639 "raid_level": "raid1", 00:16:43.639 "superblock": true, 00:16:43.639 "num_base_bdevs": 2, 00:16:43.639 "num_base_bdevs_discovered": 2, 00:16:43.639 "num_base_bdevs_operational": 2, 00:16:43.639 "base_bdevs_list": [ 00:16:43.639 { 00:16:43.639 "name": "spare", 00:16:43.639 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:43.639 "is_configured": true, 00:16:43.639 "data_offset": 256, 00:16:43.639 "data_size": 7936 00:16:43.639 }, 00:16:43.639 { 00:16:43.639 "name": "BaseBdev2", 00:16:43.639 "uuid": 
"a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:43.639 "is_configured": true, 00:16:43.639 "data_offset": 256, 00:16:43.639 "data_size": 7936 00:16:43.639 } 00:16:43.639 ] 00:16:43.639 }' 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.639 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.900 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.900 "name": "raid_bdev1", 00:16:43.900 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:43.900 "strip_size_kb": 0, 00:16:43.900 "state": "online", 00:16:43.900 "raid_level": "raid1", 00:16:43.900 "superblock": true, 00:16:43.900 "num_base_bdevs": 2, 00:16:43.900 "num_base_bdevs_discovered": 2, 00:16:43.900 "num_base_bdevs_operational": 2, 00:16:43.900 "base_bdevs_list": [ 00:16:43.900 { 00:16:43.900 "name": "spare", 00:16:43.900 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:43.900 "is_configured": true, 00:16:43.900 "data_offset": 256, 00:16:43.900 "data_size": 7936 00:16:43.900 }, 00:16:43.900 { 00:16:43.900 "name": "BaseBdev2", 00:16:43.900 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:43.900 "is_configured": true, 00:16:43.900 "data_offset": 256, 00:16:43.900 "data_size": 7936 00:16:43.900 } 00:16:43.900 ] 00:16:43.900 }' 00:16:43.900 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.900 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.160 [2024-11-28 02:32:17.746342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.160 [2024-11-28 
02:32:17.746411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.160 [2024-11-28 02:32:17.746523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.160 [2024-11-28 02:32:17.746589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.160 [2024-11-28 02:32:17.746600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:44.160 
02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.160 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:44.420 /dev/nbd0 00:16:44.420 02:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.420 1+0 
records in 00:16:44.420 1+0 records out 00:16:44.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583222 s, 7.0 MB/s 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.420 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:44.680 /dev/nbd1 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 
00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.680 1+0 records in 00:16:44.680 1+0 records out 00:16:44.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206553 s, 19.8 MB/s 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.680 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:44.940 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:44.940 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.940 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.940 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:44.940 02:32:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:44.940 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.940 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.200 
02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.200 [2024-11-28 02:32:18.862491] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.200 [2024-11-28 02:32:18.862592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.200 [2024-11-28 02:32:18.862634] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:45.200 [2024-11-28 02:32:18.862663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.200 [2024-11-28 02:32:18.864900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.200 [2024-11-28 02:32:18.864989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.200 [2024-11-28 02:32:18.865113] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev spare 00:16:45.200 [2024-11-28 02:32:18.865202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.200 [2024-11-28 02:32:18.865385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.200 spare 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.200 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.460 [2024-11-28 02:32:18.965323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:45.460 [2024-11-28 02:32:18.965383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:45.460 [2024-11-28 02:32:18.965671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:45.460 [2024-11-28 02:32:18.965874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:45.460 [2024-11-28 02:32:18.965938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:45.460 [2024-11-28 02:32:18.966131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.460 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.461 02:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.461 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.461 "name": "raid_bdev1", 00:16:45.461 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:45.461 "strip_size_kb": 0, 00:16:45.461 "state": "online", 00:16:45.461 "raid_level": "raid1", 00:16:45.461 "superblock": true, 00:16:45.461 "num_base_bdevs": 2, 00:16:45.461 "num_base_bdevs_discovered": 2, 00:16:45.461 "num_base_bdevs_operational": 2, 00:16:45.461 "base_bdevs_list": [ 00:16:45.461 { 00:16:45.461 "name": "spare", 00:16:45.461 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:45.461 "is_configured": true, 00:16:45.461 "data_offset": 256, 
00:16:45.461 "data_size": 7936 00:16:45.461 }, 00:16:45.461 { 00:16:45.461 "name": "BaseBdev2", 00:16:45.461 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:45.461 "is_configured": true, 00:16:45.461 "data_offset": 256, 00:16:45.461 "data_size": 7936 00:16:45.461 } 00:16:45.461 ] 00:16:45.461 }' 00:16:45.461 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.461 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.031 "name": "raid_bdev1", 00:16:46.031 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:46.031 "strip_size_kb": 0, 00:16:46.031 "state": "online", 00:16:46.031 "raid_level": "raid1", 00:16:46.031 "superblock": true, 00:16:46.031 
"num_base_bdevs": 2, 00:16:46.031 "num_base_bdevs_discovered": 2, 00:16:46.031 "num_base_bdevs_operational": 2, 00:16:46.031 "base_bdevs_list": [ 00:16:46.031 { 00:16:46.031 "name": "spare", 00:16:46.031 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:46.031 "is_configured": true, 00:16:46.031 "data_offset": 256, 00:16:46.031 "data_size": 7936 00:16:46.031 }, 00:16:46.031 { 00:16:46.031 "name": "BaseBdev2", 00:16:46.031 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:46.031 "is_configured": true, 00:16:46.031 "data_offset": 256, 00:16:46.031 "data_size": 7936 00:16:46.031 } 00:16:46.031 ] 00:16:46.031 }' 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.031 
02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.031 [2024-11-28 02:32:19.601258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.031 "name": "raid_bdev1", 00:16:46.031 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:46.031 "strip_size_kb": 0, 00:16:46.031 "state": "online", 00:16:46.031 "raid_level": "raid1", 00:16:46.031 "superblock": true, 00:16:46.031 "num_base_bdevs": 2, 00:16:46.031 "num_base_bdevs_discovered": 1, 00:16:46.031 "num_base_bdevs_operational": 1, 00:16:46.031 "base_bdevs_list": [ 00:16:46.031 { 00:16:46.031 "name": null, 00:16:46.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.031 "is_configured": false, 00:16:46.031 "data_offset": 0, 00:16:46.031 "data_size": 7936 00:16:46.031 }, 00:16:46.031 { 00:16:46.031 "name": "BaseBdev2", 00:16:46.031 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:46.031 "is_configured": true, 00:16:46.031 "data_offset": 256, 00:16:46.031 "data_size": 7936 00:16:46.031 } 00:16:46.031 ] 00:16:46.031 }' 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.031 02:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.601 02:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:46.601 02:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.601 02:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.601 [2024-11-28 02:32:20.044489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.601 [2024-11-28 02:32:20.044706] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:46.601 [2024-11-28 02:32:20.044782] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:46.601 [2024-11-28 02:32:20.044836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.601 [2024-11-28 02:32:20.059861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:16:46.601 02:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.601 02:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:46.601 [2024-11-28 02:32:20.061678] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.541 "name": "raid_bdev1", 00:16:47.541 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:47.541 "strip_size_kb": 0, 00:16:47.541 "state": "online", 
00:16:47.541 "raid_level": "raid1", 00:16:47.541 "superblock": true, 00:16:47.541 "num_base_bdevs": 2, 00:16:47.541 "num_base_bdevs_discovered": 2, 00:16:47.541 "num_base_bdevs_operational": 2, 00:16:47.541 "process": { 00:16:47.541 "type": "rebuild", 00:16:47.541 "target": "spare", 00:16:47.541 "progress": { 00:16:47.541 "blocks": 2560, 00:16:47.541 "percent": 32 00:16:47.541 } 00:16:47.541 }, 00:16:47.541 "base_bdevs_list": [ 00:16:47.541 { 00:16:47.541 "name": "spare", 00:16:47.541 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:47.541 "is_configured": true, 00:16:47.541 "data_offset": 256, 00:16:47.541 "data_size": 7936 00:16:47.541 }, 00:16:47.541 { 00:16:47.541 "name": "BaseBdev2", 00:16:47.541 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:47.541 "is_configured": true, 00:16:47.541 "data_offset": 256, 00:16:47.541 "data_size": 7936 00:16:47.541 } 00:16:47.541 ] 00:16:47.541 }' 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.541 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.801 [2024-11-28 02:32:21.229212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.801 [2024-11-28 02:32:21.266057] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.801 [2024-11-28 
02:32:21.266111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.801 [2024-11-28 02:32:21.266125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.801 [2024-11-28 02:32:21.266134] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.801 "name": "raid_bdev1", 00:16:47.801 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:47.801 "strip_size_kb": 0, 00:16:47.801 "state": "online", 00:16:47.801 "raid_level": "raid1", 00:16:47.801 "superblock": true, 00:16:47.801 "num_base_bdevs": 2, 00:16:47.801 "num_base_bdevs_discovered": 1, 00:16:47.801 "num_base_bdevs_operational": 1, 00:16:47.801 "base_bdevs_list": [ 00:16:47.801 { 00:16:47.801 "name": null, 00:16:47.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.801 "is_configured": false, 00:16:47.801 "data_offset": 0, 00:16:47.801 "data_size": 7936 00:16:47.801 }, 00:16:47.801 { 00:16:47.801 "name": "BaseBdev2", 00:16:47.801 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:47.801 "is_configured": true, 00:16:47.801 "data_offset": 256, 00:16:47.801 "data_size": 7936 00:16:47.801 } 00:16:47.801 ] 00:16:47.801 }' 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.801 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.061 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:48.061 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.061 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.061 [2024-11-28 02:32:21.708123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:48.061 [2024-11-28 02:32:21.708235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.061 [2024-11-28 02:32:21.708273] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:16:48.061 [2024-11-28 02:32:21.708303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.061 [2024-11-28 02:32:21.708765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.061 [2024-11-28 02:32:21.708829] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:48.061 [2024-11-28 02:32:21.708969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:48.062 [2024-11-28 02:32:21.709015] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:48.062 [2024-11-28 02:32:21.709059] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:48.062 [2024-11-28 02:32:21.709120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.062 [2024-11-28 02:32:21.724330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:16:48.062 spare 00:16:48.062 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.062 [2024-11-28 02:32:21.726138] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.062 02:32:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.478 "name": "raid_bdev1", 00:16:49.478 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:49.478 "strip_size_kb": 0, 00:16:49.478 "state": "online", 00:16:49.478 "raid_level": "raid1", 00:16:49.478 "superblock": true, 00:16:49.478 "num_base_bdevs": 2, 00:16:49.478 "num_base_bdevs_discovered": 2, 00:16:49.478 "num_base_bdevs_operational": 2, 00:16:49.478 "process": { 00:16:49.478 "type": "rebuild", 00:16:49.478 "target": "spare", 00:16:49.478 "progress": { 00:16:49.478 "blocks": 2560, 00:16:49.478 "percent": 32 00:16:49.478 } 00:16:49.478 }, 00:16:49.478 "base_bdevs_list": [ 00:16:49.478 { 00:16:49.478 "name": "spare", 00:16:49.478 "uuid": "4c6277c7-38de-5c22-9dff-f60038df908d", 00:16:49.478 "is_configured": true, 00:16:49.478 "data_offset": 256, 00:16:49.478 "data_size": 7936 00:16:49.478 }, 00:16:49.478 { 00:16:49.478 "name": "BaseBdev2", 00:16:49.478 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:49.478 "is_configured": true, 00:16:49.478 "data_offset": 256, 00:16:49.478 "data_size": 7936 00:16:49.478 } 00:16:49.478 ] 00:16:49.478 }' 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.478 [2024-11-28 02:32:22.885678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.478 [2024-11-28 02:32:22.930710] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:49.478 [2024-11-28 02:32:22.930824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.478 [2024-11-28 02:32:22.930862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.478 [2024-11-28 02:32:22.930883] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.478 02:32:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.478 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.478 "name": "raid_bdev1", 00:16:49.478 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:49.478 "strip_size_kb": 0, 00:16:49.478 "state": "online", 00:16:49.478 "raid_level": "raid1", 00:16:49.478 "superblock": true, 00:16:49.478 "num_base_bdevs": 2, 00:16:49.478 "num_base_bdevs_discovered": 1, 00:16:49.478 "num_base_bdevs_operational": 1, 00:16:49.478 "base_bdevs_list": [ 00:16:49.478 { 00:16:49.478 "name": null, 00:16:49.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.478 "is_configured": false, 00:16:49.478 "data_offset": 0, 00:16:49.478 "data_size": 7936 00:16:49.478 }, 00:16:49.478 { 00:16:49.478 "name": "BaseBdev2", 00:16:49.478 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:49.478 "is_configured": true, 00:16:49.478 "data_offset": 256, 00:16:49.478 "data_size": 7936 00:16:49.478 } 00:16:49.478 ] 00:16:49.478 }' 
00:16:49.478 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.478 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.738 "name": "raid_bdev1", 00:16:49.738 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:49.738 "strip_size_kb": 0, 00:16:49.738 "state": "online", 00:16:49.738 "raid_level": "raid1", 00:16:49.738 "superblock": true, 00:16:49.738 "num_base_bdevs": 2, 00:16:49.738 "num_base_bdevs_discovered": 1, 00:16:49.738 "num_base_bdevs_operational": 1, 00:16:49.738 "base_bdevs_list": [ 00:16:49.738 { 00:16:49.738 "name": null, 00:16:49.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.738 "is_configured": false, 00:16:49.738 "data_offset": 0, 
00:16:49.738 "data_size": 7936 00:16:49.738 }, 00:16:49.738 { 00:16:49.738 "name": "BaseBdev2", 00:16:49.738 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:49.738 "is_configured": true, 00:16:49.738 "data_offset": 256, 00:16:49.738 "data_size": 7936 00:16:49.738 } 00:16:49.738 ] 00:16:49.738 }' 00:16:49.738 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.998 [2024-11-28 02:32:23.499088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:49.998 [2024-11-28 02:32:23.499175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.998 [2024-11-28 02:32:23.499217] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:49.998 [2024-11-28 02:32:23.499256] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.998 [2024-11-28 02:32:23.499706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.998 [2024-11-28 02:32:23.499759] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:49.998 [2024-11-28 02:32:23.499860] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:49.998 [2024-11-28 02:32:23.499899] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:49.998 [2024-11-28 02:32:23.499955] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:49.998 [2024-11-28 02:32:23.499985] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:49.998 BaseBdev1 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.998 02:32:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.938 "name": "raid_bdev1", 00:16:50.938 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:50.938 "strip_size_kb": 0, 00:16:50.938 "state": "online", 00:16:50.938 "raid_level": "raid1", 00:16:50.938 "superblock": true, 00:16:50.938 "num_base_bdevs": 2, 00:16:50.938 "num_base_bdevs_discovered": 1, 00:16:50.938 "num_base_bdevs_operational": 1, 00:16:50.938 "base_bdevs_list": [ 00:16:50.938 { 00:16:50.938 "name": null, 00:16:50.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.938 "is_configured": false, 00:16:50.938 "data_offset": 0, 00:16:50.938 "data_size": 7936 00:16:50.938 }, 00:16:50.938 { 00:16:50.938 "name": "BaseBdev2", 00:16:50.938 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:50.938 "is_configured": true, 00:16:50.938 "data_offset": 256, 00:16:50.938 "data_size": 7936 00:16:50.938 } 00:16:50.938 ] 00:16:50.938 }' 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.938 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.509 "name": "raid_bdev1", 00:16:51.509 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:51.509 "strip_size_kb": 0, 00:16:51.509 "state": "online", 00:16:51.509 "raid_level": "raid1", 00:16:51.509 "superblock": true, 00:16:51.509 "num_base_bdevs": 2, 00:16:51.509 "num_base_bdevs_discovered": 1, 00:16:51.509 "num_base_bdevs_operational": 1, 00:16:51.509 "base_bdevs_list": [ 00:16:51.509 { 00:16:51.509 "name": null, 00:16:51.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.509 "is_configured": false, 00:16:51.509 "data_offset": 0, 00:16:51.509 "data_size": 7936 00:16:51.509 }, 00:16:51.509 { 00:16:51.509 "name": "BaseBdev2", 00:16:51.509 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:51.509 "is_configured": true, 
00:16:51.509 "data_offset": 256, 00:16:51.509 "data_size": 7936 00:16:51.509 } 00:16:51.509 ] 00:16:51.509 }' 00:16:51.509 02:32:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.509 [2024-11-28 02:32:25.080397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.509 [2024-11-28 02:32:25.080620] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:51.509 [2024-11-28 02:32:25.080682] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:51.509 request: 00:16:51.509 { 00:16:51.509 "base_bdev": "BaseBdev1", 00:16:51.509 "raid_bdev": "raid_bdev1", 00:16:51.509 "method": "bdev_raid_add_base_bdev", 00:16:51.509 "req_id": 1 00:16:51.509 } 00:16:51.509 Got JSON-RPC error response 00:16:51.509 response: 00:16:51.509 { 00:16:51.509 "code": -22, 00:16:51.509 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:51.509 } 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:51.509 02:32:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.448 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.708 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.708 "name": "raid_bdev1", 00:16:52.708 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:52.708 "strip_size_kb": 0, 00:16:52.708 "state": "online", 00:16:52.708 "raid_level": "raid1", 00:16:52.708 "superblock": true, 00:16:52.708 "num_base_bdevs": 2, 00:16:52.708 "num_base_bdevs_discovered": 1, 00:16:52.708 "num_base_bdevs_operational": 1, 00:16:52.708 "base_bdevs_list": [ 00:16:52.708 { 00:16:52.708 "name": null, 00:16:52.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.708 "is_configured": false, 00:16:52.708 "data_offset": 0, 00:16:52.708 "data_size": 7936 00:16:52.708 }, 00:16:52.708 { 00:16:52.708 "name": "BaseBdev2", 00:16:52.708 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:52.708 "is_configured": true, 00:16:52.708 "data_offset": 256, 00:16:52.708 "data_size": 7936 00:16:52.708 } 00:16:52.708 ] 00:16:52.708 }' 
00:16:52.708 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.708 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.969 "name": "raid_bdev1", 00:16:52.969 "uuid": "27590490-fbf0-4b64-8c92-91194e377de5", 00:16:52.969 "strip_size_kb": 0, 00:16:52.969 "state": "online", 00:16:52.969 "raid_level": "raid1", 00:16:52.969 "superblock": true, 00:16:52.969 "num_base_bdevs": 2, 00:16:52.969 "num_base_bdevs_discovered": 1, 00:16:52.969 "num_base_bdevs_operational": 1, 00:16:52.969 "base_bdevs_list": [ 00:16:52.969 { 00:16:52.969 "name": null, 00:16:52.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.969 "is_configured": false, 00:16:52.969 "data_offset": 0, 
00:16:52.969 "data_size": 7936 00:16:52.969 }, 00:16:52.969 { 00:16:52.969 "name": "BaseBdev2", 00:16:52.969 "uuid": "a45f1a84-010c-5acb-80a1-91859325ac6f", 00:16:52.969 "is_configured": true, 00:16:52.969 "data_offset": 256, 00:16:52.969 "data_size": 7936 00:16:52.969 } 00:16:52.969 ] 00:16:52.969 }' 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.969 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86237 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86237 ']' 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86237 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86237 00:16:53.229 killing process with pid 86237 00:16:53.229 Received shutdown signal, test time was about 60.000000 seconds 00:16:53.229 00:16:53.229 Latency(us) 00:16:53.229 [2024-11-28T02:32:26.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.229 [2024-11-28T02:32:26.908Z] =================================================================================================================== 00:16:53.229 [2024-11-28T02:32:26.908Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:53.229 02:32:26 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86237' 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86237 00:16:53.229 [2024-11-28 02:32:26.713276] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.229 [2024-11-28 02:32:26.713389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.229 [2024-11-28 02:32:26.713438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.229 02:32:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86237 00:16:53.229 [2024-11-28 02:32:26.713448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:53.489 [2024-11-28 02:32:26.987893] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.430 02:32:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:54.430 00:16:54.430 real 0m19.510s 00:16:54.430 user 0m25.581s 00:16:54.430 sys 0m2.452s 00:16:54.430 02:32:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.430 ************************************ 00:16:54.430 END TEST raid_rebuild_test_sb_4k 00:16:54.430 ************************************ 00:16:54.430 02:32:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.430 02:32:28 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:54.430 02:32:28 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:54.430 02:32:28 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:54.430 02:32:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.430 02:32:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.430 ************************************ 00:16:54.430 START TEST raid_state_function_test_sb_md_separate 00:16:54.430 ************************************ 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86923 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86923' 00:16:54.430 Process raid pid: 86923 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86923 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86923 ']' 00:16:54.430 02:32:28 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.430 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.690 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.690 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:54.690 [2024-11-28 02:32:28.188963] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:54.690 [2024-11-28 02:32:28.189165] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.690 [2024-11-28 02:32:28.359927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.950 [2024-11-28 02:32:28.471482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.209 [2024-11-28 02:32:28.668464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.209 [2024-11-28 02:32:28.668577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.469 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.469 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:55.469 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:55.469 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.469 02:32:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:55.469 [2024-11-28 02:32:28.999647] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.469 [2024-11-28 02:32:28.999744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.469 [2024-11-28 02:32:28.999789] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.469 [2024-11-28 02:32:28.999812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.469 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.469 "name": "Existed_Raid", 00:16:55.469 "uuid": "d48c94bf-f142-4bf8-95ec-2447a1512003", 00:16:55.469 "strip_size_kb": 0, 00:16:55.469 "state": "configuring", 00:16:55.469 "raid_level": "raid1", 00:16:55.469 "superblock": true, 00:16:55.469 "num_base_bdevs": 2, 00:16:55.469 "num_base_bdevs_discovered": 0, 00:16:55.469 "num_base_bdevs_operational": 2, 00:16:55.469 "base_bdevs_list": [ 00:16:55.469 { 00:16:55.469 "name": "BaseBdev1", 00:16:55.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.469 "is_configured": false, 00:16:55.469 "data_offset": 0, 00:16:55.470 "data_size": 0 00:16:55.470 }, 00:16:55.470 { 00:16:55.470 "name": "BaseBdev2", 00:16:55.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.470 "is_configured": false, 00:16:55.470 "data_offset": 0, 00:16:55.470 "data_size": 0 00:16:55.470 } 00:16:55.470 ] 00:16:55.470 }' 00:16:55.470 02:32:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.470 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.038 [2024-11-28 02:32:29.434819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.038 [2024-11-28 02:32:29.434888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.038 [2024-11-28 02:32:29.446805] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.038 [2024-11-28 02:32:29.446879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.038 [2024-11-28 02:32:29.446921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.038 [2024-11-28 02:32:29.446964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.038 02:32:29 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.038 [2024-11-28 02:32:29.496022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.038 BaseBdev1 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.038 [ 00:16:56.038 { 00:16:56.038 "name": "BaseBdev1", 00:16:56.038 "aliases": [ 00:16:56.038 "828146e0-eadd-4870-8e2b-ab946d167f55" 00:16:56.038 ], 00:16:56.038 "product_name": "Malloc disk", 00:16:56.038 "block_size": 4096, 00:16:56.038 "num_blocks": 8192, 00:16:56.038 "uuid": "828146e0-eadd-4870-8e2b-ab946d167f55", 00:16:56.038 "md_size": 32, 00:16:56.038 "md_interleave": false, 00:16:56.038 "dif_type": 0, 00:16:56.038 "assigned_rate_limits": { 00:16:56.038 "rw_ios_per_sec": 0, 00:16:56.038 "rw_mbytes_per_sec": 0, 00:16:56.038 "r_mbytes_per_sec": 0, 00:16:56.038 "w_mbytes_per_sec": 0 00:16:56.038 }, 00:16:56.038 "claimed": true, 00:16:56.038 "claim_type": "exclusive_write", 00:16:56.038 "zoned": false, 00:16:56.038 "supported_io_types": { 00:16:56.038 "read": true, 00:16:56.038 "write": true, 00:16:56.038 "unmap": true, 00:16:56.038 "flush": true, 00:16:56.038 "reset": true, 00:16:56.038 "nvme_admin": false, 00:16:56.038 "nvme_io": false, 00:16:56.038 "nvme_io_md": false, 00:16:56.038 "write_zeroes": true, 00:16:56.038 "zcopy": true, 00:16:56.038 "get_zone_info": false, 00:16:56.038 "zone_management": false, 00:16:56.038 "zone_append": false, 00:16:56.038 "compare": false, 00:16:56.038 "compare_and_write": false, 00:16:56.038 "abort": true, 00:16:56.038 "seek_hole": false, 00:16:56.038 "seek_data": false, 00:16:56.038 "copy": true, 00:16:56.038 "nvme_iov_md": false 00:16:56.038 }, 00:16:56.038 "memory_domains": [ 00:16:56.038 { 00:16:56.038 "dma_device_id": "system", 00:16:56.038 "dma_device_type": 1 00:16:56.038 }, 
00:16:56.038 { 00:16:56.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.038 "dma_device_type": 2 00:16:56.038 } 00:16:56.038 ], 00:16:56.038 "driver_specific": {} 00:16:56.038 } 00:16:56.038 ] 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.038 "name": "Existed_Raid", 00:16:56.038 "uuid": "c3a78357-b48a-45c5-9ff3-68475078d9f9", 00:16:56.038 "strip_size_kb": 0, 00:16:56.038 "state": "configuring", 00:16:56.038 "raid_level": "raid1", 00:16:56.038 "superblock": true, 00:16:56.038 "num_base_bdevs": 2, 00:16:56.038 "num_base_bdevs_discovered": 1, 00:16:56.038 "num_base_bdevs_operational": 2, 00:16:56.038 "base_bdevs_list": [ 00:16:56.038 { 00:16:56.038 "name": "BaseBdev1", 00:16:56.038 "uuid": "828146e0-eadd-4870-8e2b-ab946d167f55", 00:16:56.038 "is_configured": true, 00:16:56.038 "data_offset": 256, 00:16:56.038 "data_size": 7936 00:16:56.038 }, 00:16:56.038 { 00:16:56.038 "name": "BaseBdev2", 00:16:56.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.038 "is_configured": false, 00:16:56.038 "data_offset": 0, 00:16:56.038 "data_size": 0 00:16:56.038 } 00:16:56.038 ] 00:16:56.038 }' 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.038 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.298 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:56.298 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.298 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:16:56.298 [2024-11-28 02:32:29.975257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.298 [2024-11-28 02:32:29.975354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.558 [2024-11-28 02:32:29.987287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.558 [2024-11-28 02:32:29.989074] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.558 [2024-11-28 02:32:29.989150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.558 02:32:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.558 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.558 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.558 "name": "Existed_Raid", 00:16:56.558 "uuid": "66819abc-adfb-4265-bf22-e33c52233bdf", 00:16:56.558 "strip_size_kb": 0, 00:16:56.558 "state": "configuring", 00:16:56.558 "raid_level": "raid1", 00:16:56.558 "superblock": true, 00:16:56.558 "num_base_bdevs": 2, 00:16:56.558 "num_base_bdevs_discovered": 1, 00:16:56.558 
"num_base_bdevs_operational": 2, 00:16:56.558 "base_bdevs_list": [ 00:16:56.558 { 00:16:56.558 "name": "BaseBdev1", 00:16:56.558 "uuid": "828146e0-eadd-4870-8e2b-ab946d167f55", 00:16:56.558 "is_configured": true, 00:16:56.558 "data_offset": 256, 00:16:56.558 "data_size": 7936 00:16:56.558 }, 00:16:56.558 { 00:16:56.558 "name": "BaseBdev2", 00:16:56.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.558 "is_configured": false, 00:16:56.559 "data_offset": 0, 00:16:56.559 "data_size": 0 00:16:56.559 } 00:16:56.559 ] 00:16:56.559 }' 00:16:56.559 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.559 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.818 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:56.818 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.818 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.818 [2024-11-28 02:32:30.454224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.818 [2024-11-28 02:32:30.454535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:56.818 [2024-11-28 02:32:30.454576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:56.818 [2024-11-28 02:32:30.454687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:56.818 [2024-11-28 02:32:30.454847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:56.818 [2024-11-28 02:32:30.454890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:56.818 [2024-11-28 
02:32:30.455032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.818 BaseBdev2 00:16:56.818 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.818 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:56.818 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:56.818 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.818 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:16:56.818 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.818 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.819 [ 00:16:56.819 { 00:16:56.819 "name": "BaseBdev2", 00:16:56.819 "aliases": [ 00:16:56.819 
"9c3f0b65-7401-4341-acf6-ceab06506973" 00:16:56.819 ], 00:16:56.819 "product_name": "Malloc disk", 00:16:56.819 "block_size": 4096, 00:16:56.819 "num_blocks": 8192, 00:16:56.819 "uuid": "9c3f0b65-7401-4341-acf6-ceab06506973", 00:16:56.819 "md_size": 32, 00:16:56.819 "md_interleave": false, 00:16:56.819 "dif_type": 0, 00:16:56.819 "assigned_rate_limits": { 00:16:56.819 "rw_ios_per_sec": 0, 00:16:56.819 "rw_mbytes_per_sec": 0, 00:16:56.819 "r_mbytes_per_sec": 0, 00:16:56.819 "w_mbytes_per_sec": 0 00:16:56.819 }, 00:16:56.819 "claimed": true, 00:16:56.819 "claim_type": "exclusive_write", 00:16:56.819 "zoned": false, 00:16:56.819 "supported_io_types": { 00:16:56.819 "read": true, 00:16:56.819 "write": true, 00:16:56.819 "unmap": true, 00:16:56.819 "flush": true, 00:16:56.819 "reset": true, 00:16:56.819 "nvme_admin": false, 00:16:56.819 "nvme_io": false, 00:16:56.819 "nvme_io_md": false, 00:16:56.819 "write_zeroes": true, 00:16:56.819 "zcopy": true, 00:16:56.819 "get_zone_info": false, 00:16:56.819 "zone_management": false, 00:16:56.819 "zone_append": false, 00:16:56.819 "compare": false, 00:16:56.819 "compare_and_write": false, 00:16:56.819 "abort": true, 00:16:56.819 "seek_hole": false, 00:16:56.819 "seek_data": false, 00:16:56.819 "copy": true, 00:16:56.819 "nvme_iov_md": false 00:16:56.819 }, 00:16:56.819 "memory_domains": [ 00:16:56.819 { 00:16:56.819 "dma_device_id": "system", 00:16:56.819 "dma_device_type": 1 00:16:56.819 }, 00:16:56.819 { 00:16:56.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.819 "dma_device_type": 2 00:16:56.819 } 00:16:56.819 ], 00:16:56.819 "driver_specific": {} 00:16:56.819 } 00:16:56.819 ] 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.819 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.079 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.079 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.079 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.079 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.079 02:32:30 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.079 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.079 "name": "Existed_Raid", 00:16:57.079 "uuid": "66819abc-adfb-4265-bf22-e33c52233bdf", 00:16:57.079 "strip_size_kb": 0, 00:16:57.079 "state": "online", 00:16:57.079 "raid_level": "raid1", 00:16:57.079 "superblock": true, 00:16:57.079 "num_base_bdevs": 2, 00:16:57.079 "num_base_bdevs_discovered": 2, 00:16:57.079 "num_base_bdevs_operational": 2, 00:16:57.079 "base_bdevs_list": [ 00:16:57.079 { 00:16:57.079 "name": "BaseBdev1", 00:16:57.079 "uuid": "828146e0-eadd-4870-8e2b-ab946d167f55", 00:16:57.079 "is_configured": true, 00:16:57.079 "data_offset": 256, 00:16:57.079 "data_size": 7936 00:16:57.079 }, 00:16:57.079 { 00:16:57.079 "name": "BaseBdev2", 00:16:57.079 "uuid": "9c3f0b65-7401-4341-acf6-ceab06506973", 00:16:57.079 "is_configured": true, 00:16:57.079 "data_offset": 256, 00:16:57.079 "data_size": 7936 00:16:57.079 } 00:16:57.079 ] 00:16:57.079 }' 00:16:57.079 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.079 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.339 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:57.339 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:57.339 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:57.339 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:57.339 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:57.339 02:32:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:57.339 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:57.339 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:57.339 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.339 02:32:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.339 [2024-11-28 02:32:30.993663] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.339 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.599 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:57.599 "name": "Existed_Raid", 00:16:57.599 "aliases": [ 00:16:57.599 "66819abc-adfb-4265-bf22-e33c52233bdf" 00:16:57.599 ], 00:16:57.599 "product_name": "Raid Volume", 00:16:57.599 "block_size": 4096, 00:16:57.599 "num_blocks": 7936, 00:16:57.599 "uuid": "66819abc-adfb-4265-bf22-e33c52233bdf", 00:16:57.599 "md_size": 32, 00:16:57.599 "md_interleave": false, 00:16:57.599 "dif_type": 0, 00:16:57.599 "assigned_rate_limits": { 00:16:57.599 "rw_ios_per_sec": 0, 00:16:57.599 "rw_mbytes_per_sec": 0, 00:16:57.599 "r_mbytes_per_sec": 0, 00:16:57.599 "w_mbytes_per_sec": 0 00:16:57.599 }, 00:16:57.599 "claimed": false, 00:16:57.599 "zoned": false, 00:16:57.599 "supported_io_types": { 00:16:57.599 "read": true, 00:16:57.599 "write": true, 00:16:57.599 "unmap": false, 00:16:57.599 "flush": false, 00:16:57.599 "reset": true, 00:16:57.599 "nvme_admin": false, 00:16:57.599 "nvme_io": false, 00:16:57.599 "nvme_io_md": false, 00:16:57.599 "write_zeroes": true, 00:16:57.599 "zcopy": false, 00:16:57.599 "get_zone_info": 
false, 00:16:57.599 "zone_management": false, 00:16:57.599 "zone_append": false, 00:16:57.599 "compare": false, 00:16:57.599 "compare_and_write": false, 00:16:57.599 "abort": false, 00:16:57.599 "seek_hole": false, 00:16:57.599 "seek_data": false, 00:16:57.599 "copy": false, 00:16:57.600 "nvme_iov_md": false 00:16:57.600 }, 00:16:57.600 "memory_domains": [ 00:16:57.600 { 00:16:57.600 "dma_device_id": "system", 00:16:57.600 "dma_device_type": 1 00:16:57.600 }, 00:16:57.600 { 00:16:57.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.600 "dma_device_type": 2 00:16:57.600 }, 00:16:57.600 { 00:16:57.600 "dma_device_id": "system", 00:16:57.600 "dma_device_type": 1 00:16:57.600 }, 00:16:57.600 { 00:16:57.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.600 "dma_device_type": 2 00:16:57.600 } 00:16:57.600 ], 00:16:57.600 "driver_specific": { 00:16:57.600 "raid": { 00:16:57.600 "uuid": "66819abc-adfb-4265-bf22-e33c52233bdf", 00:16:57.600 "strip_size_kb": 0, 00:16:57.600 "state": "online", 00:16:57.600 "raid_level": "raid1", 00:16:57.600 "superblock": true, 00:16:57.600 "num_base_bdevs": 2, 00:16:57.600 "num_base_bdevs_discovered": 2, 00:16:57.600 "num_base_bdevs_operational": 2, 00:16:57.600 "base_bdevs_list": [ 00:16:57.600 { 00:16:57.600 "name": "BaseBdev1", 00:16:57.600 "uuid": "828146e0-eadd-4870-8e2b-ab946d167f55", 00:16:57.600 "is_configured": true, 00:16:57.600 "data_offset": 256, 00:16:57.600 "data_size": 7936 00:16:57.600 }, 00:16:57.600 { 00:16:57.600 "name": "BaseBdev2", 00:16:57.600 "uuid": "9c3f0b65-7401-4341-acf6-ceab06506973", 00:16:57.600 "is_configured": true, 00:16:57.600 "data_offset": 256, 00:16:57.600 "data_size": 7936 00:16:57.600 } 00:16:57.600 ] 00:16:57.600 } 00:16:57.600 } 00:16:57.600 }' 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.600 02:32:31 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:57.600 BaseBdev2' 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.600 02:32:31 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.600 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.600 [2024-11-28 02:32:31.201039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.860 "name": "Existed_Raid", 
00:16:57.860 "uuid": "66819abc-adfb-4265-bf22-e33c52233bdf", 00:16:57.860 "strip_size_kb": 0, 00:16:57.860 "state": "online", 00:16:57.860 "raid_level": "raid1", 00:16:57.860 "superblock": true, 00:16:57.860 "num_base_bdevs": 2, 00:16:57.860 "num_base_bdevs_discovered": 1, 00:16:57.860 "num_base_bdevs_operational": 1, 00:16:57.860 "base_bdevs_list": [ 00:16:57.860 { 00:16:57.860 "name": null, 00:16:57.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.860 "is_configured": false, 00:16:57.860 "data_offset": 0, 00:16:57.860 "data_size": 7936 00:16:57.860 }, 00:16:57.860 { 00:16:57.860 "name": "BaseBdev2", 00:16:57.860 "uuid": "9c3f0b65-7401-4341-acf6-ceab06506973", 00:16:57.860 "is_configured": true, 00:16:57.860 "data_offset": 256, 00:16:57.860 "data_size": 7936 00:16:57.860 } 00:16:57.860 ] 00:16:57.860 }' 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.860 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.120 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:58.120 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:58.121 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.121 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:58.121 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.121 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.121 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.121 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:58.121 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.121 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:58.121 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.121 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.121 [2024-11-28 02:32:31.766574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:58.121 [2024-11-28 02:32:31.766721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.381 [2024-11-28 02:32:31.865547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.381 [2024-11-28 02:32:31.865644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.381 [2024-11-28 02:32:31.865685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86923 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86923 ']' 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86923 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86923 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.381 killing process with pid 86923 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86923' 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86923 00:16:58.381 [2024-11-28 02:32:31.954296] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.381 02:32:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86923 00:16:58.381 [2024-11-28 02:32:31.970560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.377 ************************************ 00:16:59.377 END TEST raid_state_function_test_sb_md_separate 00:16:59.377 ************************************ 00:16:59.377 02:32:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:59.377 00:16:59.377 real 0m4.921s 00:16:59.377 user 0m7.081s 00:16:59.377 sys 0m0.807s 00:16:59.377 02:32:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.377 02:32:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.637 02:32:33 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:59.637 02:32:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:59.637 02:32:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.637 02:32:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.637 ************************************ 00:16:59.637 START TEST raid_superblock_test_md_separate 00:16:59.637 ************************************ 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87176 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87176 00:16:59.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87176 ']' 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.637 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.637 [2024-11-28 02:32:33.176610] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:59.637 [2024-11-28 02:32:33.176723] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87176 ] 00:16:59.897 [2024-11-28 02:32:33.349747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.897 [2024-11-28 02:32:33.456320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.156 [2024-11-28 02:32:33.646898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.156 [2024-11-28 02:32:33.646947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # 
(( i = 1 )) 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.416 02:32:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.416 malloc1 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.416 [2024-11-28 02:32:34.033740] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:00.416 [2024-11-28 02:32:34.033854] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.416 [2024-11-28 02:32:34.033893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:00.416 [2024-11-28 02:32:34.033929] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.416 [2024-11-28 02:32:34.035725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.416 [2024-11-28 02:32:34.035794] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:00.416 pt1 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.416 02:32:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.416 malloc2 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.416 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.416 [2024-11-28 02:32:34.090281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:00.416 [2024-11-28 02:32:34.090378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.416 [2024-11-28 02:32:34.090431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:00.416 [2024-11-28 02:32:34.090459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.416 [2024-11-28 02:32:34.092305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.416 [2024-11-28 02:32:34.092363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:00.677 pt2 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.677 [2024-11-28 02:32:34.102284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:00.677 [2024-11-28 02:32:34.104047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.677 [2024-11-28 02:32:34.104232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:00.677 [2024-11-28 02:32:34.104247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:00.677 [2024-11-28 02:32:34.104317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:00.677 [2024-11-28 02:32:34.104434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:00.677 [2024-11-28 02:32:34.104445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:00.677 [2024-11-28 02:32:34.104534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.677 "name": "raid_bdev1", 00:17:00.677 "uuid": "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71", 00:17:00.677 "strip_size_kb": 0, 00:17:00.677 "state": "online", 00:17:00.677 "raid_level": "raid1", 00:17:00.677 "superblock": true, 00:17:00.677 "num_base_bdevs": 2, 00:17:00.677 "num_base_bdevs_discovered": 2, 00:17:00.677 "num_base_bdevs_operational": 2, 00:17:00.677 "base_bdevs_list": [ 00:17:00.677 { 00:17:00.677 "name": "pt1", 00:17:00.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.677 "is_configured": true, 00:17:00.677 "data_offset": 256, 00:17:00.677 "data_size": 7936 00:17:00.677 }, 00:17:00.677 { 00:17:00.677 "name": "pt2", 00:17:00.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.677 "is_configured": true, 00:17:00.677 "data_offset": 256, 
00:17:00.677 "data_size": 7936 00:17:00.677 } 00:17:00.677 ] 00:17:00.677 }' 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.677 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.937 [2024-11-28 02:32:34.517794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:00.937 "name": "raid_bdev1", 00:17:00.937 "aliases": [ 00:17:00.937 "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71" 00:17:00.937 ], 00:17:00.937 "product_name": 
"Raid Volume", 00:17:00.937 "block_size": 4096, 00:17:00.937 "num_blocks": 7936, 00:17:00.937 "uuid": "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71", 00:17:00.937 "md_size": 32, 00:17:00.937 "md_interleave": false, 00:17:00.937 "dif_type": 0, 00:17:00.937 "assigned_rate_limits": { 00:17:00.937 "rw_ios_per_sec": 0, 00:17:00.937 "rw_mbytes_per_sec": 0, 00:17:00.937 "r_mbytes_per_sec": 0, 00:17:00.937 "w_mbytes_per_sec": 0 00:17:00.937 }, 00:17:00.937 "claimed": false, 00:17:00.937 "zoned": false, 00:17:00.937 "supported_io_types": { 00:17:00.937 "read": true, 00:17:00.937 "write": true, 00:17:00.937 "unmap": false, 00:17:00.937 "flush": false, 00:17:00.937 "reset": true, 00:17:00.937 "nvme_admin": false, 00:17:00.937 "nvme_io": false, 00:17:00.937 "nvme_io_md": false, 00:17:00.937 "write_zeroes": true, 00:17:00.937 "zcopy": false, 00:17:00.937 "get_zone_info": false, 00:17:00.937 "zone_management": false, 00:17:00.937 "zone_append": false, 00:17:00.937 "compare": false, 00:17:00.937 "compare_and_write": false, 00:17:00.937 "abort": false, 00:17:00.937 "seek_hole": false, 00:17:00.937 "seek_data": false, 00:17:00.937 "copy": false, 00:17:00.937 "nvme_iov_md": false 00:17:00.937 }, 00:17:00.937 "memory_domains": [ 00:17:00.937 { 00:17:00.937 "dma_device_id": "system", 00:17:00.937 "dma_device_type": 1 00:17:00.937 }, 00:17:00.937 { 00:17:00.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.937 "dma_device_type": 2 00:17:00.937 }, 00:17:00.937 { 00:17:00.937 "dma_device_id": "system", 00:17:00.937 "dma_device_type": 1 00:17:00.937 }, 00:17:00.937 { 00:17:00.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.937 "dma_device_type": 2 00:17:00.937 } 00:17:00.937 ], 00:17:00.937 "driver_specific": { 00:17:00.937 "raid": { 00:17:00.937 "uuid": "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71", 00:17:00.937 "strip_size_kb": 0, 00:17:00.937 "state": "online", 00:17:00.937 "raid_level": "raid1", 00:17:00.937 "superblock": true, 00:17:00.937 "num_base_bdevs": 2, 00:17:00.937 
"num_base_bdevs_discovered": 2, 00:17:00.937 "num_base_bdevs_operational": 2, 00:17:00.937 "base_bdevs_list": [ 00:17:00.937 { 00:17:00.937 "name": "pt1", 00:17:00.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.937 "is_configured": true, 00:17:00.937 "data_offset": 256, 00:17:00.937 "data_size": 7936 00:17:00.937 }, 00:17:00.937 { 00:17:00.937 "name": "pt2", 00:17:00.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.937 "is_configured": true, 00:17:00.937 "data_offset": 256, 00:17:00.937 "data_size": 7936 00:17:00.937 } 00:17:00.937 ] 00:17:00.937 } 00:17:00.937 } 00:17:00.937 }' 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.937 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:00.937 pt2' 00:17:00.938 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.938 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:00.938 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.198 
02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 [2024-11-28 02:32:34.701423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71 ']' 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 [2024-11-28 02:32:34.745115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.198 [2024-11-28 02:32:34.745171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.198 [2024-11-28 02:32:34.745261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.198 [2024-11-28 02:32:34.745338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.198 [2024-11-28 02:32:34.745388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.198 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.199 02:32:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.199 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.459 [2024-11-28 02:32:34.876914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:01.459 [2024-11-28 02:32:34.878677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:01.459 [2024-11-28 02:32:34.878802] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev 
found on bdev malloc1 00:17:01.459 [2024-11-28 02:32:34.878909] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:01.459 [2024-11-28 02:32:34.878969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.459 [2024-11-28 02:32:34.879032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:01.459 request: 00:17:01.459 { 00:17:01.459 "name": "raid_bdev1", 00:17:01.459 "raid_level": "raid1", 00:17:01.459 "base_bdevs": [ 00:17:01.459 "malloc1", 00:17:01.459 "malloc2" 00:17:01.459 ], 00:17:01.459 "superblock": false, 00:17:01.459 "method": "bdev_raid_create", 00:17:01.459 "req_id": 1 00:17:01.459 } 00:17:01.459 Got JSON-RPC error response 00:17:01.459 response: 00:17:01.459 { 00:17:01.459 "code": -17, 00:17:01.459 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:01.459 } 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.459 [2024-11-28 02:32:34.932803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.459 [2024-11-28 02:32:34.932887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.459 [2024-11-28 02:32:34.932925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:01.459 [2024-11-28 02:32:34.932955] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.459 [2024-11-28 02:32:34.934791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.459 [2024-11-28 02:32:34.934869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.459 [2024-11-28 02:32:34.934962] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:01.459 [2024-11-28 02:32:34.935034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.459 pt1 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.459 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.460 "name": "raid_bdev1", 00:17:01.460 "uuid": "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71", 00:17:01.460 "strip_size_kb": 0, 00:17:01.460 "state": "configuring", 00:17:01.460 
"raid_level": "raid1", 00:17:01.460 "superblock": true, 00:17:01.460 "num_base_bdevs": 2, 00:17:01.460 "num_base_bdevs_discovered": 1, 00:17:01.460 "num_base_bdevs_operational": 2, 00:17:01.460 "base_bdevs_list": [ 00:17:01.460 { 00:17:01.460 "name": "pt1", 00:17:01.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:01.460 "is_configured": true, 00:17:01.460 "data_offset": 256, 00:17:01.460 "data_size": 7936 00:17:01.460 }, 00:17:01.460 { 00:17:01.460 "name": null, 00:17:01.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.460 "is_configured": false, 00:17:01.460 "data_offset": 256, 00:17:01.460 "data_size": 7936 00:17:01.460 } 00:17:01.460 ] 00:17:01.460 }' 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.460 02:32:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.720 [2024-11-28 02:32:35.356109] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.720 [2024-11-28 02:32:35.356217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.720 [2024-11-28 02:32:35.356255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:17:01.720 [2024-11-28 02:32:35.356284] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.720 [2024-11-28 02:32:35.356531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.720 [2024-11-28 02:32:35.356582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.720 [2024-11-28 02:32:35.356657] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:01.720 [2024-11-28 02:32:35.356705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.720 [2024-11-28 02:32:35.356849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:01.720 [2024-11-28 02:32:35.356888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:01.720 [2024-11-28 02:32:35.356995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:01.720 [2024-11-28 02:32:35.357148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:01.720 [2024-11-28 02:32:35.357184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:01.720 [2024-11-28 02:32:35.357326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.720 pt2 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.720 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.979 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.979 "name": "raid_bdev1", 00:17:01.979 "uuid": "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71", 00:17:01.979 "strip_size_kb": 0, 00:17:01.979 "state": "online", 00:17:01.979 "raid_level": "raid1", 00:17:01.979 "superblock": true, 00:17:01.979 "num_base_bdevs": 2, 00:17:01.979 
"num_base_bdevs_discovered": 2, 00:17:01.979 "num_base_bdevs_operational": 2, 00:17:01.979 "base_bdevs_list": [ 00:17:01.979 { 00:17:01.979 "name": "pt1", 00:17:01.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:01.979 "is_configured": true, 00:17:01.979 "data_offset": 256, 00:17:01.979 "data_size": 7936 00:17:01.979 }, 00:17:01.979 { 00:17:01.979 "name": "pt2", 00:17:01.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.979 "is_configured": true, 00:17:01.979 "data_offset": 256, 00:17:01.979 "data_size": 7936 00:17:01.979 } 00:17:01.979 ] 00:17:01.979 }' 00:17:01.979 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.979 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
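The `verify_raid_bdev_properties` trace above fetches the raid bdev's JSON via `rpc_cmd bdev_get_bdevs` and uses `jq` to compare `[.block_size, .md_size, .md_interleave, .dif_type]` between the raid volume and each configured base bdev. As a sketch only (not part of the log), the same check can be expressed in Python against a sample of the JSON shown in this trace; the base-bdev layout values here are copied from the `4096 32 false 0` comparison strings logged below, not queried live.

```python
import json

# Sample of the raid bdev info dumped in this log (trimmed to the fields
# the test's jq filters actually read).
raid_bdev = json.loads("""{
    "name": "raid_bdev1",
    "block_size": 4096,
    "md_size": 32,
    "md_interleave": false,
    "dif_type": 0,
    "driver_specific": {"raid": {"base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true}
    ]}}
}""")

def md_signature(bdev):
    # Mirrors the jq filter '[.block_size, .md_size, .md_interleave, .dif_type]'.
    return (bdev["block_size"], bdev["md_size"],
            bdev["md_interleave"], bdev["dif_type"])

# In the real test each base bdev's layout comes from a separate
# `rpc_cmd bdev_get_bdevs -b <name>` call; here it is hard-coded to the
# values the log reports for pt1 and pt2 (an assumption of this sketch).
base_bdev = {"block_size": 4096, "md_size": 32,
             "md_interleave": False, "dif_type": 0}

configured = [b["name"]
              for b in raid_bdev["driver_specific"]["raid"]["base_bdevs_list"]
              if b["is_configured"]]
assert configured == ["pt1", "pt2"]
assert md_signature(raid_bdev) == md_signature(base_bdev) == (4096, 32, False, 0)
```

The equality of the two signatures is what the shell test's `[[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]` comparisons verify for each base bdev in turn.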
00:17:02.239 [2024-11-28 02:32:35.819532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.239 "name": "raid_bdev1", 00:17:02.239 "aliases": [ 00:17:02.239 "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71" 00:17:02.239 ], 00:17:02.239 "product_name": "Raid Volume", 00:17:02.239 "block_size": 4096, 00:17:02.239 "num_blocks": 7936, 00:17:02.239 "uuid": "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71", 00:17:02.239 "md_size": 32, 00:17:02.239 "md_interleave": false, 00:17:02.239 "dif_type": 0, 00:17:02.239 "assigned_rate_limits": { 00:17:02.239 "rw_ios_per_sec": 0, 00:17:02.239 "rw_mbytes_per_sec": 0, 00:17:02.239 "r_mbytes_per_sec": 0, 00:17:02.239 "w_mbytes_per_sec": 0 00:17:02.239 }, 00:17:02.239 "claimed": false, 00:17:02.239 "zoned": false, 00:17:02.239 "supported_io_types": { 00:17:02.239 "read": true, 00:17:02.239 "write": true, 00:17:02.239 "unmap": false, 00:17:02.239 "flush": false, 00:17:02.239 "reset": true, 00:17:02.239 "nvme_admin": false, 00:17:02.239 "nvme_io": false, 00:17:02.239 "nvme_io_md": false, 00:17:02.239 "write_zeroes": true, 00:17:02.239 "zcopy": false, 00:17:02.239 "get_zone_info": false, 00:17:02.239 "zone_management": false, 00:17:02.239 "zone_append": false, 00:17:02.239 "compare": false, 00:17:02.239 "compare_and_write": false, 00:17:02.239 "abort": false, 00:17:02.239 "seek_hole": false, 00:17:02.239 "seek_data": false, 00:17:02.239 "copy": false, 00:17:02.239 "nvme_iov_md": false 00:17:02.239 }, 00:17:02.239 "memory_domains": [ 00:17:02.239 { 00:17:02.239 "dma_device_id": "system", 00:17:02.239 "dma_device_type": 1 00:17:02.239 }, 00:17:02.239 { 00:17:02.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.239 "dma_device_type": 2 00:17:02.239 }, 00:17:02.239 { 00:17:02.239 "dma_device_id": 
"system", 00:17:02.239 "dma_device_type": 1 00:17:02.239 }, 00:17:02.239 { 00:17:02.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.239 "dma_device_type": 2 00:17:02.239 } 00:17:02.239 ], 00:17:02.239 "driver_specific": { 00:17:02.239 "raid": { 00:17:02.239 "uuid": "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71", 00:17:02.239 "strip_size_kb": 0, 00:17:02.239 "state": "online", 00:17:02.239 "raid_level": "raid1", 00:17:02.239 "superblock": true, 00:17:02.239 "num_base_bdevs": 2, 00:17:02.239 "num_base_bdevs_discovered": 2, 00:17:02.239 "num_base_bdevs_operational": 2, 00:17:02.239 "base_bdevs_list": [ 00:17:02.239 { 00:17:02.239 "name": "pt1", 00:17:02.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.239 "is_configured": true, 00:17:02.239 "data_offset": 256, 00:17:02.239 "data_size": 7936 00:17:02.239 }, 00:17:02.239 { 00:17:02.239 "name": "pt2", 00:17:02.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.239 "is_configured": true, 00:17:02.239 "data_offset": 256, 00:17:02.239 "data_size": 7936 00:17:02.239 } 00:17:02.239 ] 00:17:02.239 } 00:17:02.239 } 00:17:02.239 }' 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:02.239 pt2' 00:17:02.239 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:02.499 02:32:35 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.499 02:32:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 [2024-11-28 02:32:36.035161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71 '!=' a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71 ']' 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 [2024-11-28 02:32:36.078865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:02.499 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.500 "name": "raid_bdev1", 00:17:02.500 "uuid": "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71", 00:17:02.500 "strip_size_kb": 0, 00:17:02.500 "state": "online", 00:17:02.500 "raid_level": "raid1", 00:17:02.500 "superblock": true, 00:17:02.500 "num_base_bdevs": 2, 00:17:02.500 "num_base_bdevs_discovered": 1, 00:17:02.500 "num_base_bdevs_operational": 1, 00:17:02.500 "base_bdevs_list": [ 00:17:02.500 { 00:17:02.500 "name": 
null, 00:17:02.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.500 "is_configured": false, 00:17:02.500 "data_offset": 0, 00:17:02.500 "data_size": 7936 00:17:02.500 }, 00:17:02.500 { 00:17:02.500 "name": "pt2", 00:17:02.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.500 "is_configured": true, 00:17:02.500 "data_offset": 256, 00:17:02.500 "data_size": 7936 00:17:02.500 } 00:17:02.500 ] 00:17:02.500 }' 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.500 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.070 [2024-11-28 02:32:36.506128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.070 [2024-11-28 02:32:36.506192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.070 [2024-11-28 02:32:36.506293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.070 [2024-11-28 02:32:36.506353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.070 [2024-11-28 02:32:36.506411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.070 [2024-11-28 02:32:36.585998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.070 [2024-11-28 02:32:36.586101] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.070 [2024-11-28 02:32:36.586131] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:03.070 [2024-11-28 02:32:36.586159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.070 [2024-11-28 02:32:36.588049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.070 [2024-11-28 02:32:36.588138] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.070 [2024-11-28 02:32:36.588205] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:03.070 [2024-11-28 02:32:36.588266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.070 [2024-11-28 02:32:36.588366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:03.070 [2024-11-28 02:32:36.588408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.070 [2024-11-28 02:32:36.588495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:03.070 [2024-11-28 02:32:36.588637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:03.070 [2024-11-28 02:32:36.588672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:03.070 [2024-11-28 02:32:36.588799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:03.070 pt2 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.070 
02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.070 "name": "raid_bdev1", 00:17:03.070 "uuid": "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71", 00:17:03.070 "strip_size_kb": 0, 00:17:03.070 "state": "online", 00:17:03.070 "raid_level": "raid1", 00:17:03.070 "superblock": true, 00:17:03.070 "num_base_bdevs": 2, 00:17:03.070 "num_base_bdevs_discovered": 1, 00:17:03.070 "num_base_bdevs_operational": 1, 00:17:03.070 "base_bdevs_list": [ 00:17:03.070 { 00:17:03.070 "name": null, 00:17:03.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.070 "is_configured": false, 00:17:03.070 "data_offset": 256, 00:17:03.070 "data_size": 7936 00:17:03.070 }, 00:17:03.070 { 00:17:03.070 "name": "pt2", 00:17:03.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.070 "is_configured": true, 00:17:03.070 "data_offset": 256, 00:17:03.070 "data_size": 7936 00:17:03.070 } 00:17:03.070 ] 00:17:03.070 }' 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.070 02:32:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 [2024-11-28 02:32:37.017253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.640 [2024-11-28 02:32:37.017327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.640 [2024-11-28 02:32:37.017412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.640 [2024-11-28 02:32:37.017470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.640 [2024-11-28 02:32:37.017502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 [2024-11-28 02:32:37.061228] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:03.640 [2024-11-28 02:32:37.061308] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.640 [2024-11-28 02:32:37.061340] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:17:03.640 [2024-11-28 02:32:37.061365] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.640 [2024-11-28 02:32:37.063222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.640 [2024-11-28 02:32:37.063300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:03.640 [2024-11-28 02:32:37.063367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:03.640 [2024-11-28 02:32:37.063428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:03.640 [2024-11-28 02:32:37.063571] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:03.640 [2024-11-28 02:32:37.063645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.640 [2024-11-28 02:32:37.063685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:03.640 [2024-11-28 02:32:37.063803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.640 [2024-11-28 02:32:37.063899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:03.640 [2024-11-28 02:32:37.063950] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.640 [2024-11-28 02:32:37.064026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:03.640 [2024-11-28 02:32:37.064155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:03.640 [2024-11-28 02:32:37.064192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:03.640 [2024-11-28 02:32:37.064329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.640 pt1 00:17:03.640 
02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.640 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.640 "name": "raid_bdev1", 00:17:03.640 "uuid": "a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71", 00:17:03.640 "strip_size_kb": 0, 00:17:03.640 "state": "online", 00:17:03.640 "raid_level": "raid1", 00:17:03.640 "superblock": true, 00:17:03.640 "num_base_bdevs": 2, 00:17:03.640 "num_base_bdevs_discovered": 1, 00:17:03.641 "num_base_bdevs_operational": 1, 00:17:03.641 "base_bdevs_list": [ 00:17:03.641 { 00:17:03.641 "name": null, 00:17:03.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.641 "is_configured": false, 00:17:03.641 "data_offset": 256, 00:17:03.641 "data_size": 7936 00:17:03.641 }, 00:17:03.641 { 00:17:03.641 "name": "pt2", 00:17:03.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.641 "is_configured": true, 00:17:03.641 "data_offset": 256, 00:17:03.641 "data_size": 7936 00:17:03.641 } 00:17:03.641 ] 00:17:03.641 }' 00:17:03.641 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.641 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:03.900 02:32:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.900 [2024-11-28 02:32:37.524620] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71 '!=' a2abacd8-0e42-4bc2-a63e-2d3a9d4b8c71 ']' 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87176 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87176 ']' 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87176 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.900 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87176 00:17:04.160 killing process with pid 87176 00:17:04.160 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.160 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.160 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 87176' 00:17:04.160 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87176 00:17:04.160 [2024-11-28 02:32:37.601124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.160 [2024-11-28 02:32:37.601199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.160 [2024-11-28 02:32:37.601243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.160 [2024-11-28 02:32:37.601259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:04.160 02:32:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87176 00:17:04.160 [2024-11-28 02:32:37.808212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.539 02:32:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:05.539 00:17:05.539 real 0m5.772s 00:17:05.539 user 0m8.675s 00:17:05.539 sys 0m1.066s 00:17:05.539 02:32:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.539 02:32:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.539 ************************************ 00:17:05.539 END TEST raid_superblock_test_md_separate 00:17:05.539 ************************************ 00:17:05.539 02:32:38 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:05.539 02:32:38 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:05.539 02:32:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:05.539 02:32:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.539 02:32:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.539 ************************************ 
00:17:05.539 START TEST raid_rebuild_test_sb_md_separate 00:17:05.539 ************************************ 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:05.539 
02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:05.539 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87499 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87499 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87499 ']' 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.540 02:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.540 [2024-11-28 02:32:39.033100] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:05.540 [2024-11-28 02:32:39.033290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:05.540 Zero copy mechanism will not be used. 00:17:05.540 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87499 ] 00:17:05.540 [2024-11-28 02:32:39.203858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.799 [2024-11-28 02:32:39.306537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.059 [2024-11-28 02:32:39.495784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.059 [2024-11-28 02:32:39.495914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.319 BaseBdev1_malloc 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.319 [2024-11-28 02:32:39.893580] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:06.319 [2024-11-28 02:32:39.893708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.319 [2024-11-28 02:32:39.893746] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:06.319 [2024-11-28 02:32:39.893778] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.319 [2024-11-28 02:32:39.895704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.319 [2024-11-28 02:32:39.895776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.319 BaseBdev1 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:06.319 BaseBdev2_malloc 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.319 [2024-11-28 02:32:39.949043] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:06.319 [2024-11-28 02:32:39.949170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.319 [2024-11-28 02:32:39.949210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.319 [2024-11-28 02:32:39.949250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.319 [2024-11-28 02:32:39.951235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.319 [2024-11-28 02:32:39.951305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:06.319 BaseBdev2 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.319 02:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.579 spare_malloc 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.579 02:32:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.579 spare_delay 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.579 [2024-11-28 02:32:40.022567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.579 [2024-11-28 02:32:40.022683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.579 [2024-11-28 02:32:40.022719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:06.579 [2024-11-28 02:32:40.022766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.579 [2024-11-28 02:32:40.024573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.579 [2024-11-28 02:32:40.024647] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.579 spare 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:06.579 02:32:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.579 [2024-11-28 02:32:40.034589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.579 [2024-11-28 02:32:40.036358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.579 [2024-11-28 02:32:40.036588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:06.579 [2024-11-28 02:32:40.036634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:06.579 [2024-11-28 02:32:40.036728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:06.579 [2024-11-28 02:32:40.036882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:06.579 [2024-11-28 02:32:40.036896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:06.579 [2024-11-28 02:32:40.037022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.579 02:32:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.579 "name": "raid_bdev1", 00:17:06.579 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:06.579 "strip_size_kb": 0, 00:17:06.579 "state": "online", 00:17:06.579 "raid_level": "raid1", 00:17:06.579 "superblock": true, 00:17:06.579 "num_base_bdevs": 2, 00:17:06.579 "num_base_bdevs_discovered": 2, 00:17:06.579 "num_base_bdevs_operational": 2, 00:17:06.579 "base_bdevs_list": [ 00:17:06.579 { 00:17:06.579 "name": "BaseBdev1", 00:17:06.579 "uuid": "c5a1de31-efd0-5de7-b100-a569df3d1bb6", 00:17:06.579 "is_configured": true, 00:17:06.579 "data_offset": 256, 00:17:06.579 "data_size": 7936 00:17:06.579 }, 00:17:06.579 { 00:17:06.579 "name": "BaseBdev2", 00:17:06.579 "uuid": 
"e503e480-1570-5c73-b975-c24437e09aba", 00:17:06.579 "is_configured": true, 00:17:06.579 "data_offset": 256, 00:17:06.579 "data_size": 7936 00:17:06.579 } 00:17:06.579 ] 00:17:06.579 }' 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.579 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.149 [2024-11-28 02:32:40.533989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:07.149 02:32:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.149 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:07.149 [2024-11-28 02:32:40.805370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:07.149 /dev/nbd0 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.409 02:32:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.409 1+0 records in 00:17:07.409 1+0 records out 00:17:07.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490065 s, 8.4 MB/s 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:07.409 02:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:07.979 7936+0 records in 00:17:07.979 7936+0 records out 00:17:07.979 32505856 bytes (33 MB, 31 MiB) copied, 0.589014 s, 55.2 MB/s 00:17:07.979 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:07.979 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.979 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:07.979 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.979 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:07.979 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.979 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.240 [2024-11-28 02:32:41.674968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.240 [2024-11-28 02:32:41.695608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.240 02:32:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.240 "name": "raid_bdev1", 00:17:08.240 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:08.240 "strip_size_kb": 0, 00:17:08.240 "state": "online", 00:17:08.240 "raid_level": "raid1", 00:17:08.240 "superblock": true, 00:17:08.240 "num_base_bdevs": 2, 00:17:08.240 "num_base_bdevs_discovered": 1, 00:17:08.240 "num_base_bdevs_operational": 1, 00:17:08.240 "base_bdevs_list": [ 00:17:08.240 { 00:17:08.240 "name": null, 00:17:08.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.240 "is_configured": false, 00:17:08.240 "data_offset": 0, 00:17:08.240 "data_size": 7936 00:17:08.240 }, 00:17:08.240 { 00:17:08.240 "name": "BaseBdev2", 00:17:08.240 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:08.240 "is_configured": true, 00:17:08.240 "data_offset": 256, 00:17:08.240 "data_size": 7936 00:17:08.240 } 
00:17:08.240 ] 00:17:08.240 }' 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.240 02:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.500 02:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:08.500 02:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.500 02:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.500 [2024-11-28 02:32:42.158812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.500 [2024-11-28 02:32:42.171037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:08.500 02:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.501 [2024-11-28 02:32:42.172776] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.501 02:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:09.883 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.883 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.883 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.883 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.883 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.883 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.883 02:32:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.883 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.883 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.883 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.883 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.883 "name": "raid_bdev1", 00:17:09.883 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:09.883 "strip_size_kb": 0, 00:17:09.883 "state": "online", 00:17:09.883 "raid_level": "raid1", 00:17:09.883 "superblock": true, 00:17:09.883 "num_base_bdevs": 2, 00:17:09.883 "num_base_bdevs_discovered": 2, 00:17:09.883 "num_base_bdevs_operational": 2, 00:17:09.883 "process": { 00:17:09.883 "type": "rebuild", 00:17:09.883 "target": "spare", 00:17:09.883 "progress": { 00:17:09.883 "blocks": 2560, 00:17:09.883 "percent": 32 00:17:09.883 } 00:17:09.883 }, 00:17:09.883 "base_bdevs_list": [ 00:17:09.883 { 00:17:09.883 "name": "spare", 00:17:09.883 "uuid": "40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:09.883 "is_configured": true, 00:17:09.883 "data_offset": 256, 00:17:09.883 "data_size": 7936 00:17:09.883 }, 00:17:09.883 { 00:17:09.883 "name": "BaseBdev2", 00:17:09.883 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:09.883 "is_configured": true, 00:17:09.883 "data_offset": 256, 00:17:09.883 "data_size": 7936 00:17:09.883 } 00:17:09.883 ] 00:17:09.883 }' 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 [2024-11-28 02:32:43.312888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.884 [2024-11-28 02:32:43.377303] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:09.884 [2024-11-28 02:32:43.377378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.884 [2024-11-28 02:32:43.377392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.884 [2024-11-28 02:32:43.377403] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.884 "name": "raid_bdev1", 00:17:09.884 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:09.884 "strip_size_kb": 0, 00:17:09.884 "state": "online", 00:17:09.884 "raid_level": "raid1", 00:17:09.884 "superblock": true, 00:17:09.884 "num_base_bdevs": 2, 00:17:09.884 "num_base_bdevs_discovered": 1, 00:17:09.884 "num_base_bdevs_operational": 1, 00:17:09.884 "base_bdevs_list": [ 00:17:09.884 { 00:17:09.884 "name": null, 00:17:09.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.884 "is_configured": false, 00:17:09.884 "data_offset": 0, 00:17:09.884 "data_size": 7936 00:17:09.884 }, 00:17:09.884 { 00:17:09.884 "name": "BaseBdev2", 00:17:09.884 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:09.884 "is_configured": true, 00:17:09.884 "data_offset": 
256, 00:17:09.884 "data_size": 7936 00:17:09.884 } 00:17:09.884 ] 00:17:09.884 }' 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.884 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.144 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.144 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.144 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.144 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.144 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.144 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.144 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.144 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.144 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.403 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.404 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.404 "name": "raid_bdev1", 00:17:10.404 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:10.404 "strip_size_kb": 0, 00:17:10.404 "state": "online", 00:17:10.404 "raid_level": "raid1", 00:17:10.404 "superblock": true, 00:17:10.404 "num_base_bdevs": 2, 00:17:10.404 "num_base_bdevs_discovered": 1, 00:17:10.404 "num_base_bdevs_operational": 1, 
00:17:10.404 "base_bdevs_list": [ 00:17:10.404 { 00:17:10.404 "name": null, 00:17:10.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.404 "is_configured": false, 00:17:10.404 "data_offset": 0, 00:17:10.404 "data_size": 7936 00:17:10.404 }, 00:17:10.404 { 00:17:10.404 "name": "BaseBdev2", 00:17:10.404 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:10.404 "is_configured": true, 00:17:10.404 "data_offset": 256, 00:17:10.404 "data_size": 7936 00:17:10.404 } 00:17:10.404 ] 00:17:10.404 }' 00:17:10.404 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.404 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.404 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.404 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.404 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:10.404 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.404 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.404 [2024-11-28 02:32:43.943911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:10.404 [2024-11-28 02:32:43.957365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:10.404 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.404 02:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:10.404 [2024-11-28 02:32:43.959101] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:11.344 02:32:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.344 02:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.344 02:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.344 02:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.344 02:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.344 02:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.344 02:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.344 02:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.344 02:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.344 02:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.344 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.344 "name": "raid_bdev1", 00:17:11.344 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:11.344 "strip_size_kb": 0, 00:17:11.344 "state": "online", 00:17:11.344 "raid_level": "raid1", 00:17:11.344 "superblock": true, 00:17:11.344 "num_base_bdevs": 2, 00:17:11.344 "num_base_bdevs_discovered": 2, 00:17:11.344 "num_base_bdevs_operational": 2, 00:17:11.344 "process": { 00:17:11.344 "type": "rebuild", 00:17:11.344 "target": "spare", 00:17:11.344 "progress": { 00:17:11.344 "blocks": 2560, 00:17:11.344 "percent": 32 00:17:11.344 } 00:17:11.344 }, 00:17:11.344 "base_bdevs_list": [ 00:17:11.344 { 00:17:11.344 "name": "spare", 00:17:11.344 "uuid": 
"40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:11.344 "is_configured": true, 00:17:11.344 "data_offset": 256, 00:17:11.344 "data_size": 7936 00:17:11.344 }, 00:17:11.344 { 00:17:11.344 "name": "BaseBdev2", 00:17:11.344 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:11.344 "is_configured": true, 00:17:11.344 "data_offset": 256, 00:17:11.344 "data_size": 7936 00:17:11.344 } 00:17:11.344 ] 00:17:11.344 }' 00:17:11.344 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:11.604 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=695 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.604 
02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.604 "name": "raid_bdev1", 00:17:11.604 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:11.604 "strip_size_kb": 0, 00:17:11.604 "state": "online", 00:17:11.604 "raid_level": "raid1", 00:17:11.604 "superblock": true, 00:17:11.604 "num_base_bdevs": 2, 00:17:11.604 "num_base_bdevs_discovered": 2, 00:17:11.604 "num_base_bdevs_operational": 2, 00:17:11.604 "process": { 00:17:11.604 "type": "rebuild", 00:17:11.604 "target": "spare", 00:17:11.604 "progress": { 00:17:11.604 "blocks": 2816, 00:17:11.604 "percent": 35 00:17:11.604 } 00:17:11.604 }, 00:17:11.604 "base_bdevs_list": [ 00:17:11.604 { 00:17:11.604 "name": "spare", 00:17:11.604 "uuid": "40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:11.604 "is_configured": true, 00:17:11.604 "data_offset": 256, 00:17:11.604 "data_size": 7936 00:17:11.604 
}, 00:17:11.604 { 00:17:11.604 "name": "BaseBdev2", 00:17:11.604 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:11.604 "is_configured": true, 00:17:11.604 "data_offset": 256, 00:17:11.604 "data_size": 7936 00:17:11.604 } 00:17:11.604 ] 00:17:11.604 }' 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.604 02:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.999 02:32:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.999 "name": "raid_bdev1", 00:17:12.999 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:12.999 "strip_size_kb": 0, 00:17:12.999 "state": "online", 00:17:12.999 "raid_level": "raid1", 00:17:12.999 "superblock": true, 00:17:12.999 "num_base_bdevs": 2, 00:17:12.999 "num_base_bdevs_discovered": 2, 00:17:12.999 "num_base_bdevs_operational": 2, 00:17:12.999 "process": { 00:17:12.999 "type": "rebuild", 00:17:12.999 "target": "spare", 00:17:12.999 "progress": { 00:17:12.999 "blocks": 5632, 00:17:12.999 "percent": 70 00:17:12.999 } 00:17:12.999 }, 00:17:12.999 "base_bdevs_list": [ 00:17:12.999 { 00:17:12.999 "name": "spare", 00:17:12.999 "uuid": "40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:12.999 "is_configured": true, 00:17:12.999 "data_offset": 256, 00:17:12.999 "data_size": 7936 00:17:12.999 }, 00:17:12.999 { 00:17:12.999 "name": "BaseBdev2", 00:17:12.999 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:12.999 "is_configured": true, 00:17:12.999 "data_offset": 256, 00:17:12.999 "data_size": 7936 00:17:12.999 } 00:17:12.999 ] 00:17:12.999 }' 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.999 02:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.569 [2024-11-28 02:32:47.070554] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:13.569 [2024-11-28 02:32:47.070637] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:13.569 [2024-11-28 02:32:47.070729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.829 "name": "raid_bdev1", 00:17:13.829 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:13.829 
"strip_size_kb": 0, 00:17:13.829 "state": "online", 00:17:13.829 "raid_level": "raid1", 00:17:13.829 "superblock": true, 00:17:13.829 "num_base_bdevs": 2, 00:17:13.829 "num_base_bdevs_discovered": 2, 00:17:13.829 "num_base_bdevs_operational": 2, 00:17:13.829 "base_bdevs_list": [ 00:17:13.829 { 00:17:13.829 "name": "spare", 00:17:13.829 "uuid": "40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:13.829 "is_configured": true, 00:17:13.829 "data_offset": 256, 00:17:13.829 "data_size": 7936 00:17:13.829 }, 00:17:13.829 { 00:17:13.829 "name": "BaseBdev2", 00:17:13.829 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:13.829 "is_configured": true, 00:17:13.829 "data_offset": 256, 00:17:13.829 "data_size": 7936 00:17:13.829 } 00:17:13.829 ] 00:17:13.829 }' 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:13.829 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.089 02:32:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.089 "name": "raid_bdev1", 00:17:14.089 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:14.089 "strip_size_kb": 0, 00:17:14.089 "state": "online", 00:17:14.089 "raid_level": "raid1", 00:17:14.089 "superblock": true, 00:17:14.089 "num_base_bdevs": 2, 00:17:14.089 "num_base_bdevs_discovered": 2, 00:17:14.089 "num_base_bdevs_operational": 2, 00:17:14.089 "base_bdevs_list": [ 00:17:14.089 { 00:17:14.089 "name": "spare", 00:17:14.089 "uuid": "40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:14.089 "is_configured": true, 00:17:14.089 "data_offset": 256, 00:17:14.089 "data_size": 7936 00:17:14.089 }, 00:17:14.089 { 00:17:14.089 "name": "BaseBdev2", 00:17:14.089 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:14.089 "is_configured": true, 00:17:14.089 "data_offset": 256, 00:17:14.089 "data_size": 7936 00:17:14.089 } 00:17:14.089 ] 00:17:14.089 }' 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.089 02:32:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.089 "name": "raid_bdev1", 00:17:14.089 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:14.089 "strip_size_kb": 0, 00:17:14.089 "state": "online", 00:17:14.089 "raid_level": "raid1", 00:17:14.089 "superblock": true, 00:17:14.089 "num_base_bdevs": 2, 00:17:14.089 "num_base_bdevs_discovered": 2, 00:17:14.089 "num_base_bdevs_operational": 2, 00:17:14.089 "base_bdevs_list": [ 00:17:14.089 { 00:17:14.089 "name": "spare", 00:17:14.089 "uuid": "40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:14.089 "is_configured": true, 00:17:14.089 "data_offset": 256, 00:17:14.089 "data_size": 7936 00:17:14.089 }, 00:17:14.089 { 00:17:14.089 "name": "BaseBdev2", 00:17:14.089 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:14.089 "is_configured": true, 00:17:14.089 "data_offset": 256, 00:17:14.089 "data_size": 7936 00:17:14.089 } 00:17:14.089 ] 00:17:14.089 }' 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.089 02:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.659 [2024-11-28 02:32:48.116442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.659 [2024-11-28 02:32:48.116475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.659 [2024-11-28 02:32:48.116563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.659 [2024-11-28 02:32:48.116629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:17:14.659 [2024-11-28 02:32:48.116643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:14.659 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:14.660 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:14.660 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:14.660 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.660 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:14.660 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:14.660 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:14.660 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:14.660 02:32:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:14.660 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:14.660 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:14.660 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:14.920 /dev/nbd0 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.920 1+0 records in 00:17:14.920 1+0 records out 00:17:14.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411158 
s, 10.0 MB/s 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:14.920 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:14.920 /dev/nbd1 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.180 1+0 records in 00:17:15.180 1+0 records out 00:17:15.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289928 s, 14.1 MB/s 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.180 02:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:15.440 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:15.440 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:15.440 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:15.440 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.440 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.440 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:15.440 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:15.440 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.440 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.440 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:15.701 
02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.701 [2024-11-28 02:32:49.265210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:15.701 [2024-11-28 02:32:49.265276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.701 [2024-11-28 02:32:49.265312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:17:15.701 [2024-11-28 02:32:49.265321] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.701 [2024-11-28 02:32:49.267239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.701 [2024-11-28 02:32:49.267272] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:15.701 [2024-11-28 02:32:49.267332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:15.701 [2024-11-28 02:32:49.267379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.701 [2024-11-28 02:32:49.267540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.701 spare 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.701 [2024-11-28 02:32:49.367437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:15.701 [2024-11-28 02:32:49.367466] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:15.701 [2024-11-28 02:32:49.367553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:15.701 [2024-11-28 02:32:49.367687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:15.701 [2024-11-28 02:32:49.367703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:15.701 [2024-11-28 02:32:49.367825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.701 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.961 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.961 02:32:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.961 "name": "raid_bdev1", 00:17:15.961 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:15.961 "strip_size_kb": 0, 00:17:15.961 "state": "online", 00:17:15.961 "raid_level": "raid1", 00:17:15.961 "superblock": true, 00:17:15.961 "num_base_bdevs": 2, 00:17:15.961 "num_base_bdevs_discovered": 2, 00:17:15.961 "num_base_bdevs_operational": 2, 00:17:15.961 "base_bdevs_list": [ 00:17:15.961 { 00:17:15.961 "name": "spare", 00:17:15.961 "uuid": "40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:15.961 "is_configured": true, 00:17:15.961 "data_offset": 256, 00:17:15.961 "data_size": 7936 00:17:15.961 }, 00:17:15.961 { 00:17:15.961 "name": "BaseBdev2", 00:17:15.961 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:15.961 "is_configured": true, 00:17:15.961 "data_offset": 256, 00:17:15.961 "data_size": 7936 00:17:15.961 } 00:17:15.961 ] 00:17:15.961 }' 00:17:15.961 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.961 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.222 "name": "raid_bdev1", 00:17:16.222 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:16.222 "strip_size_kb": 0, 00:17:16.222 "state": "online", 00:17:16.222 "raid_level": "raid1", 00:17:16.222 "superblock": true, 00:17:16.222 "num_base_bdevs": 2, 00:17:16.222 "num_base_bdevs_discovered": 2, 00:17:16.222 "num_base_bdevs_operational": 2, 00:17:16.222 "base_bdevs_list": [ 00:17:16.222 { 00:17:16.222 "name": "spare", 00:17:16.222 "uuid": "40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:16.222 "is_configured": true, 00:17:16.222 "data_offset": 256, 00:17:16.222 "data_size": 7936 00:17:16.222 }, 00:17:16.222 { 00:17:16.222 "name": "BaseBdev2", 00:17:16.222 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:16.222 "is_configured": true, 00:17:16.222 "data_offset": 256, 00:17:16.222 "data_size": 7936 00:17:16.222 } 00:17:16.222 ] 00:17:16.222 }' 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.222 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.482 [2024-11-28 02:32:49.964091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.482 02:32:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.482 02:32:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.482 02:32:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.482 "name": "raid_bdev1", 00:17:16.482 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:16.482 "strip_size_kb": 0, 00:17:16.482 "state": "online", 00:17:16.482 "raid_level": "raid1", 00:17:16.482 "superblock": true, 00:17:16.482 "num_base_bdevs": 2, 00:17:16.482 "num_base_bdevs_discovered": 1, 00:17:16.482 "num_base_bdevs_operational": 1, 00:17:16.482 "base_bdevs_list": [ 00:17:16.482 { 00:17:16.482 "name": null, 00:17:16.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.482 "is_configured": false, 00:17:16.482 "data_offset": 0, 00:17:16.482 "data_size": 7936 00:17:16.482 }, 00:17:16.482 { 00:17:16.482 "name": "BaseBdev2", 00:17:16.482 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:16.482 "is_configured": true, 00:17:16.482 "data_offset": 256, 00:17:16.482 "data_size": 7936 00:17:16.482 } 
00:17:16.482 ] 00:17:16.482 }' 00:17:16.483 02:32:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.483 02:32:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.052 02:32:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.052 02:32:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.052 02:32:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.053 [2024-11-28 02:32:50.431314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.053 [2024-11-28 02:32:50.431517] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:17.053 [2024-11-28 02:32:50.431542] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:17.053 [2024-11-28 02:32:50.431576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.053 [2024-11-28 02:32:50.445056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:17.053 02:32:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.053 02:32:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:17.053 [2024-11-28 02:32:50.446809] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.993 "name": "raid_bdev1", 00:17:17.993 
"uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:17.993 "strip_size_kb": 0, 00:17:17.993 "state": "online", 00:17:17.993 "raid_level": "raid1", 00:17:17.993 "superblock": true, 00:17:17.993 "num_base_bdevs": 2, 00:17:17.993 "num_base_bdevs_discovered": 2, 00:17:17.993 "num_base_bdevs_operational": 2, 00:17:17.993 "process": { 00:17:17.993 "type": "rebuild", 00:17:17.993 "target": "spare", 00:17:17.993 "progress": { 00:17:17.993 "blocks": 2560, 00:17:17.993 "percent": 32 00:17:17.993 } 00:17:17.993 }, 00:17:17.993 "base_bdevs_list": [ 00:17:17.993 { 00:17:17.993 "name": "spare", 00:17:17.993 "uuid": "40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:17.993 "is_configured": true, 00:17:17.993 "data_offset": 256, 00:17:17.993 "data_size": 7936 00:17:17.993 }, 00:17:17.993 { 00:17:17.993 "name": "BaseBdev2", 00:17:17.993 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:17.993 "is_configured": true, 00:17:17.993 "data_offset": 256, 00:17:17.993 "data_size": 7936 00:17:17.993 } 00:17:17.993 ] 00:17:17.993 }' 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.993 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.993 [2024-11-28 02:32:51.587894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.993 
[2024-11-28 02:32:51.651404] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.993 [2024-11-28 02:32:51.651459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.993 [2024-11-28 02:32:51.651489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.993 [2024-11-28 02:32:51.651506] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:18.253 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.253 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.253 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.253 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.253 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.253 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.254 02:32:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.254 "name": "raid_bdev1", 00:17:18.254 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:18.254 "strip_size_kb": 0, 00:17:18.254 "state": "online", 00:17:18.254 "raid_level": "raid1", 00:17:18.254 "superblock": true, 00:17:18.254 "num_base_bdevs": 2, 00:17:18.254 "num_base_bdevs_discovered": 1, 00:17:18.254 "num_base_bdevs_operational": 1, 00:17:18.254 "base_bdevs_list": [ 00:17:18.254 { 00:17:18.254 "name": null, 00:17:18.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.254 "is_configured": false, 00:17:18.254 "data_offset": 0, 00:17:18.254 "data_size": 7936 00:17:18.254 }, 00:17:18.254 { 00:17:18.254 "name": "BaseBdev2", 00:17:18.254 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:18.254 "is_configured": true, 00:17:18.254 "data_offset": 256, 00:17:18.254 "data_size": 7936 00:17:18.254 } 00:17:18.254 ] 00:17:18.254 }' 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.254 02:32:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.514 02:32:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:18.514 02:32:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.514 02:32:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.514 [2024-11-28 02:32:52.086027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:18.514 [2024-11-28 02:32:52.086097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.514 [2024-11-28 02:32:52.086120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:18.514 [2024-11-28 02:32:52.086130] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.514 [2024-11-28 02:32:52.086360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.514 [2024-11-28 02:32:52.086385] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:18.514 [2024-11-28 02:32:52.086433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:18.514 [2024-11-28 02:32:52.086446] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:18.514 [2024-11-28 02:32:52.086455] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:18.514 [2024-11-28 02:32:52.086491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.514 [2024-11-28 02:32:52.100279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:18.514 spare 00:17:18.514 02:32:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.514 02:32:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:18.514 [2024-11-28 02:32:52.102017] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.455 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.455 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.455 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.455 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.455 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.455 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.455 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.455 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.455 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.455 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.714 "name": 
"raid_bdev1", 00:17:19.714 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:19.714 "strip_size_kb": 0, 00:17:19.714 "state": "online", 00:17:19.714 "raid_level": "raid1", 00:17:19.714 "superblock": true, 00:17:19.714 "num_base_bdevs": 2, 00:17:19.714 "num_base_bdevs_discovered": 2, 00:17:19.714 "num_base_bdevs_operational": 2, 00:17:19.714 "process": { 00:17:19.714 "type": "rebuild", 00:17:19.714 "target": "spare", 00:17:19.714 "progress": { 00:17:19.714 "blocks": 2560, 00:17:19.714 "percent": 32 00:17:19.714 } 00:17:19.714 }, 00:17:19.714 "base_bdevs_list": [ 00:17:19.714 { 00:17:19.714 "name": "spare", 00:17:19.714 "uuid": "40dce6f9-cf6e-5aca-af2b-0f116cb9664d", 00:17:19.714 "is_configured": true, 00:17:19.714 "data_offset": 256, 00:17:19.714 "data_size": 7936 00:17:19.714 }, 00:17:19.714 { 00:17:19.714 "name": "BaseBdev2", 00:17:19.714 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:19.714 "is_configured": true, 00:17:19.714 "data_offset": 256, 00:17:19.714 "data_size": 7936 00:17:19.714 } 00:17:19.714 ] 00:17:19.714 }' 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.714 [2024-11-28 02:32:53.234287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:19.714 [2024-11-28 02:32:53.306589] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:19.714 [2024-11-28 02:32:53.306642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.714 [2024-11-28 02:32:53.306674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.714 [2024-11-28 02:32:53.306680] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:19.714 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.715 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.715 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.715 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.715 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.715 "name": "raid_bdev1", 00:17:19.715 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:19.715 "strip_size_kb": 0, 00:17:19.715 "state": "online", 00:17:19.715 "raid_level": "raid1", 00:17:19.715 "superblock": true, 00:17:19.715 "num_base_bdevs": 2, 00:17:19.715 "num_base_bdevs_discovered": 1, 00:17:19.715 "num_base_bdevs_operational": 1, 00:17:19.715 "base_bdevs_list": [ 00:17:19.715 { 00:17:19.715 "name": null, 00:17:19.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.715 "is_configured": false, 00:17:19.715 "data_offset": 0, 00:17:19.715 "data_size": 7936 00:17:19.715 }, 00:17:19.715 { 00:17:19.715 "name": "BaseBdev2", 00:17:19.715 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:19.715 "is_configured": true, 00:17:19.715 "data_offset": 256, 00:17:19.715 "data_size": 7936 00:17:19.715 } 00:17:19.715 ] 00:17:19.715 }' 00:17:19.715 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.715 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.284 02:32:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.284 "name": "raid_bdev1", 00:17:20.284 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:20.284 "strip_size_kb": 0, 00:17:20.284 "state": "online", 00:17:20.284 "raid_level": "raid1", 00:17:20.284 "superblock": true, 00:17:20.284 "num_base_bdevs": 2, 00:17:20.284 "num_base_bdevs_discovered": 1, 00:17:20.284 "num_base_bdevs_operational": 1, 00:17:20.284 "base_bdevs_list": [ 00:17:20.284 { 00:17:20.284 "name": null, 00:17:20.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.284 "is_configured": false, 00:17:20.284 "data_offset": 0, 00:17:20.284 "data_size": 7936 00:17:20.284 }, 00:17:20.284 { 00:17:20.284 "name": "BaseBdev2", 00:17:20.284 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:20.284 "is_configured": true, 00:17:20.284 "data_offset": 256, 00:17:20.284 "data_size": 7936 00:17:20.284 } 00:17:20.284 ] 00:17:20.284 }' 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.284 [2024-11-28 02:32:53.912914] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:20.284 [2024-11-28 02:32:53.912972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.284 [2024-11-28 02:32:53.913008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:20.284 [2024-11-28 02:32:53.913017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.284 [2024-11-28 02:32:53.913239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.284 [2024-11-28 02:32:53.913269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:17:20.284 [2024-11-28 02:32:53.913319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:20.284 [2024-11-28 02:32:53.913337] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:20.284 [2024-11-28 02:32:53.913346] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:20.284 [2024-11-28 02:32:53.913356] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:20.284 BaseBdev1 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.284 02:32:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.665 "name": "raid_bdev1", 00:17:21.665 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:21.665 "strip_size_kb": 0, 00:17:21.665 "state": "online", 00:17:21.665 "raid_level": "raid1", 00:17:21.665 "superblock": true, 00:17:21.665 "num_base_bdevs": 2, 00:17:21.665 "num_base_bdevs_discovered": 1, 00:17:21.665 "num_base_bdevs_operational": 1, 00:17:21.665 "base_bdevs_list": [ 00:17:21.665 { 00:17:21.665 "name": null, 00:17:21.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.665 "is_configured": false, 00:17:21.665 "data_offset": 0, 00:17:21.665 "data_size": 7936 00:17:21.665 }, 00:17:21.665 { 00:17:21.665 "name": "BaseBdev2", 00:17:21.665 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:21.665 "is_configured": true, 00:17:21.665 "data_offset": 256, 00:17:21.665 "data_size": 7936 00:17:21.665 } 00:17:21.665 ] 00:17:21.665 }' 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.665 02:32:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.925 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.926 "name": "raid_bdev1", 00:17:21.926 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:21.926 "strip_size_kb": 0, 00:17:21.926 "state": "online", 00:17:21.926 "raid_level": "raid1", 00:17:21.926 "superblock": true, 00:17:21.926 "num_base_bdevs": 2, 00:17:21.926 "num_base_bdevs_discovered": 1, 00:17:21.926 "num_base_bdevs_operational": 1, 00:17:21.926 "base_bdevs_list": [ 00:17:21.926 { 00:17:21.926 "name": null, 00:17:21.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.926 "is_configured": false, 00:17:21.926 "data_offset": 0, 00:17:21.926 "data_size": 7936 00:17:21.926 }, 00:17:21.926 { 00:17:21.926 "name": "BaseBdev2", 00:17:21.926 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:21.926 "is_configured": 
true, 00:17:21.926 "data_offset": 256, 00:17:21.926 "data_size": 7936 00:17:21.926 } 00:17:21.926 ] 00:17:21.926 }' 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.926 [2024-11-28 02:32:55.538141] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.926 [2024-11-28 02:32:55.538296] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:21.926 [2024-11-28 02:32:55.538313] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:21.926 request: 00:17:21.926 { 00:17:21.926 "base_bdev": "BaseBdev1", 00:17:21.926 "raid_bdev": "raid_bdev1", 00:17:21.926 "method": "bdev_raid_add_base_bdev", 00:17:21.926 "req_id": 1 00:17:21.926 } 00:17:21.926 Got JSON-RPC error response 00:17:21.926 response: 00:17:21.926 { 00:17:21.926 "code": -22, 00:17:21.926 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:21.926 } 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.926 02:32:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.308 "name": "raid_bdev1", 00:17:23.308 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:23.308 "strip_size_kb": 0, 00:17:23.308 "state": "online", 00:17:23.308 "raid_level": "raid1", 00:17:23.308 "superblock": true, 00:17:23.308 "num_base_bdevs": 2, 00:17:23.308 "num_base_bdevs_discovered": 1, 00:17:23.308 "num_base_bdevs_operational": 1, 00:17:23.308 "base_bdevs_list": [ 00:17:23.308 { 00:17:23.308 "name": null, 00:17:23.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.308 "is_configured": false, 00:17:23.308 
"data_offset": 0, 00:17:23.308 "data_size": 7936 00:17:23.308 }, 00:17:23.308 { 00:17:23.308 "name": "BaseBdev2", 00:17:23.308 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:23.308 "is_configured": true, 00:17:23.308 "data_offset": 256, 00:17:23.308 "data_size": 7936 00:17:23.308 } 00:17:23.308 ] 00:17:23.308 }' 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.308 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.568 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.568 "name": "raid_bdev1", 00:17:23.568 "uuid": "2eb9abf9-86d5-43c8-9c46-17798eacf3b0", 00:17:23.568 
"strip_size_kb": 0, 00:17:23.568 "state": "online", 00:17:23.568 "raid_level": "raid1", 00:17:23.568 "superblock": true, 00:17:23.568 "num_base_bdevs": 2, 00:17:23.568 "num_base_bdevs_discovered": 1, 00:17:23.568 "num_base_bdevs_operational": 1, 00:17:23.568 "base_bdevs_list": [ 00:17:23.568 { 00:17:23.568 "name": null, 00:17:23.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.568 "is_configured": false, 00:17:23.568 "data_offset": 0, 00:17:23.568 "data_size": 7936 00:17:23.568 }, 00:17:23.568 { 00:17:23.568 "name": "BaseBdev2", 00:17:23.568 "uuid": "e503e480-1570-5c73-b975-c24437e09aba", 00:17:23.568 "is_configured": true, 00:17:23.568 "data_offset": 256, 00:17:23.568 "data_size": 7936 00:17:23.568 } 00:17:23.568 ] 00:17:23.568 }' 00:17:23.568 02:32:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.568 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87499 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87499 ']' 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87499 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87499 00:17:23.569 02:32:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87499' 00:17:23.569 killing process with pid 87499 00:17:23.569 Received shutdown signal, test time was about 60.000000 seconds 00:17:23.569 00:17:23.569 Latency(us) 00:17:23.569 [2024-11-28T02:32:57.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.569 [2024-11-28T02:32:57.248Z] =================================================================================================================== 00:17:23.569 [2024-11-28T02:32:57.248Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87499 00:17:23.569 [2024-11-28 02:32:57.123956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:23.569 [2024-11-28 02:32:57.124076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.569 [2024-11-28 02:32:57.124131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.569 [2024-11-28 02:32:57.124143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:23.569 02:32:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87499 00:17:23.828 [2024-11-28 02:32:57.431930] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:25.211 02:32:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:25.211 00:17:25.211 real 0m19.542s 00:17:25.211 user 0m25.528s 00:17:25.211 sys 0m2.525s 00:17:25.211 
************************************ 00:17:25.211 END TEST raid_rebuild_test_sb_md_separate 00:17:25.211 ************************************ 00:17:25.211 02:32:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.211 02:32:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.211 02:32:58 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:25.211 02:32:58 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:25.211 02:32:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:25.211 02:32:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.211 02:32:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.211 ************************************ 00:17:25.211 START TEST raid_state_function_test_sb_md_interleaved 00:17:25.211 ************************************ 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:25.211 02:32:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:25.211 Process raid pid: 88187 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88187 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88187' 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88187 00:17:25.211 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88187 ']' 00:17:25.212 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.212 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.212 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.212 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.212 02:32:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:25.212 [2024-11-28 02:32:58.646821] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:25.212 [2024-11-28 02:32:58.647069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.212 [2024-11-28 02:32:58.819759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.471 [2024-11-28 02:32:58.925889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.472 [2024-11-28 02:32:59.114663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.472 [2024-11-28 02:32:59.114740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.049 [2024-11-28 02:32:59.466997] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:26.049 [2024-11-28 02:32:59.467107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:26.049 [2024-11-28 02:32:59.467139] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.049 [2024-11-28 02:32:59.467163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.049 02:32:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.049 02:32:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.049 "name": "Existed_Raid", 00:17:26.049 "uuid": "aad7d949-4d2f-4fa3-8654-5a5ef2feee6c", 00:17:26.049 "strip_size_kb": 0, 00:17:26.049 "state": "configuring", 00:17:26.049 "raid_level": "raid1", 00:17:26.049 "superblock": true, 00:17:26.049 "num_base_bdevs": 2, 00:17:26.049 "num_base_bdevs_discovered": 0, 00:17:26.049 "num_base_bdevs_operational": 2, 00:17:26.049 "base_bdevs_list": [ 00:17:26.049 { 00:17:26.049 "name": "BaseBdev1", 00:17:26.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.049 "is_configured": false, 00:17:26.049 "data_offset": 0, 00:17:26.049 "data_size": 0 00:17:26.049 }, 00:17:26.049 { 00:17:26.049 "name": "BaseBdev2", 00:17:26.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.049 "is_configured": false, 00:17:26.049 "data_offset": 0, 00:17:26.049 "data_size": 0 00:17:26.049 } 00:17:26.049 ] 00:17:26.049 }' 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.049 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 [2024-11-28 02:32:59.914122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.308 [2024-11-28 02:32:59.914190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 [2024-11-28 02:32:59.926112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:26.308 [2024-11-28 02:32:59.926185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:26.308 [2024-11-28 02:32:59.926226] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.308 [2024-11-28 02:32:59.926250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 [2024-11-28 02:32:59.967184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.308 BaseBdev1 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 02:32:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.568 [ 00:17:26.568 { 00:17:26.568 "name": "BaseBdev1", 00:17:26.568 "aliases": [ 00:17:26.568 "20cb27f8-4ead-416e-9fc8-9fe98a7e1d72" 00:17:26.568 ], 00:17:26.568 "product_name": "Malloc disk", 00:17:26.568 "block_size": 4128, 00:17:26.568 "num_blocks": 8192, 00:17:26.568 "uuid": "20cb27f8-4ead-416e-9fc8-9fe98a7e1d72", 00:17:26.568 "md_size": 32, 00:17:26.568 
"md_interleave": true, 00:17:26.568 "dif_type": 0, 00:17:26.568 "assigned_rate_limits": { 00:17:26.568 "rw_ios_per_sec": 0, 00:17:26.568 "rw_mbytes_per_sec": 0, 00:17:26.568 "r_mbytes_per_sec": 0, 00:17:26.568 "w_mbytes_per_sec": 0 00:17:26.568 }, 00:17:26.568 "claimed": true, 00:17:26.568 "claim_type": "exclusive_write", 00:17:26.568 "zoned": false, 00:17:26.568 "supported_io_types": { 00:17:26.568 "read": true, 00:17:26.568 "write": true, 00:17:26.568 "unmap": true, 00:17:26.568 "flush": true, 00:17:26.568 "reset": true, 00:17:26.568 "nvme_admin": false, 00:17:26.568 "nvme_io": false, 00:17:26.568 "nvme_io_md": false, 00:17:26.568 "write_zeroes": true, 00:17:26.568 "zcopy": true, 00:17:26.568 "get_zone_info": false, 00:17:26.568 "zone_management": false, 00:17:26.568 "zone_append": false, 00:17:26.568 "compare": false, 00:17:26.568 "compare_and_write": false, 00:17:26.568 "abort": true, 00:17:26.568 "seek_hole": false, 00:17:26.568 "seek_data": false, 00:17:26.568 "copy": true, 00:17:26.568 "nvme_iov_md": false 00:17:26.568 }, 00:17:26.568 "memory_domains": [ 00:17:26.568 { 00:17:26.568 "dma_device_id": "system", 00:17:26.568 "dma_device_type": 1 00:17:26.568 }, 00:17:26.568 { 00:17:26.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.568 "dma_device_type": 2 00:17:26.568 } 00:17:26.568 ], 00:17:26.568 "driver_specific": {} 00:17:26.568 } 00:17:26.568 ] 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.568 02:33:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.568 "name": "Existed_Raid", 00:17:26.568 "uuid": "7a48322e-e970-4316-bad9-7fbe7b935672", 00:17:26.568 "strip_size_kb": 0, 00:17:26.568 "state": "configuring", 00:17:26.568 "raid_level": "raid1", 
00:17:26.568 "superblock": true, 00:17:26.568 "num_base_bdevs": 2, 00:17:26.568 "num_base_bdevs_discovered": 1, 00:17:26.568 "num_base_bdevs_operational": 2, 00:17:26.568 "base_bdevs_list": [ 00:17:26.568 { 00:17:26.568 "name": "BaseBdev1", 00:17:26.568 "uuid": "20cb27f8-4ead-416e-9fc8-9fe98a7e1d72", 00:17:26.568 "is_configured": true, 00:17:26.568 "data_offset": 256, 00:17:26.568 "data_size": 7936 00:17:26.568 }, 00:17:26.568 { 00:17:26.568 "name": "BaseBdev2", 00:17:26.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.568 "is_configured": false, 00:17:26.568 "data_offset": 0, 00:17:26.568 "data_size": 0 00:17:26.568 } 00:17:26.568 ] 00:17:26.568 }' 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.568 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.829 [2024-11-28 02:33:00.434456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.829 [2024-11-28 02:33:00.434545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.829 [2024-11-28 02:33:00.446481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.829 [2024-11-28 02:33:00.448341] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.829 [2024-11-28 02:33:00.448420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.829 
02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.829 "name": "Existed_Raid", 00:17:26.829 "uuid": "e023b1ca-5e41-497e-bf22-ea43950c1129", 00:17:26.829 "strip_size_kb": 0, 00:17:26.829 "state": "configuring", 00:17:26.829 "raid_level": "raid1", 00:17:26.829 "superblock": true, 00:17:26.829 "num_base_bdevs": 2, 00:17:26.829 "num_base_bdevs_discovered": 1, 00:17:26.829 "num_base_bdevs_operational": 2, 00:17:26.829 "base_bdevs_list": [ 00:17:26.829 { 00:17:26.829 "name": "BaseBdev1", 00:17:26.829 "uuid": "20cb27f8-4ead-416e-9fc8-9fe98a7e1d72", 00:17:26.829 "is_configured": true, 00:17:26.829 "data_offset": 256, 00:17:26.829 "data_size": 7936 00:17:26.829 }, 00:17:26.829 { 00:17:26.829 "name": "BaseBdev2", 00:17:26.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.829 "is_configured": false, 00:17:26.829 "data_offset": 0, 00:17:26.829 "data_size": 0 00:17:26.829 } 00:17:26.829 ] 00:17:26.829 }' 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:26.829 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.435 [2024-11-28 02:33:00.949699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.435 [2024-11-28 02:33:00.950033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:27.435 [2024-11-28 02:33:00.950070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:27.435 [2024-11-28 02:33:00.950206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:27.435 [2024-11-28 02:33:00.950309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:27.435 [2024-11-28 02:33:00.950345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:27.435 [2024-11-28 02:33:00.950439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.435 BaseBdev2 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.435 [ 00:17:27.435 { 00:17:27.435 "name": "BaseBdev2", 00:17:27.435 "aliases": [ 00:17:27.435 "ded9d0c7-4c72-4869-ad9a-76bab2f9485b" 00:17:27.435 ], 00:17:27.435 "product_name": "Malloc disk", 00:17:27.435 "block_size": 4128, 00:17:27.435 "num_blocks": 8192, 00:17:27.435 "uuid": "ded9d0c7-4c72-4869-ad9a-76bab2f9485b", 00:17:27.435 "md_size": 32, 00:17:27.435 "md_interleave": true, 00:17:27.435 "dif_type": 0, 00:17:27.435 "assigned_rate_limits": { 00:17:27.435 "rw_ios_per_sec": 0, 00:17:27.435 "rw_mbytes_per_sec": 0, 00:17:27.435 "r_mbytes_per_sec": 0, 00:17:27.435 "w_mbytes_per_sec": 0 00:17:27.435 }, 00:17:27.435 "claimed": true, 00:17:27.435 "claim_type": "exclusive_write", 
00:17:27.435 "zoned": false, 00:17:27.435 "supported_io_types": { 00:17:27.435 "read": true, 00:17:27.435 "write": true, 00:17:27.435 "unmap": true, 00:17:27.435 "flush": true, 00:17:27.435 "reset": true, 00:17:27.435 "nvme_admin": false, 00:17:27.435 "nvme_io": false, 00:17:27.435 "nvme_io_md": false, 00:17:27.435 "write_zeroes": true, 00:17:27.435 "zcopy": true, 00:17:27.435 "get_zone_info": false, 00:17:27.435 "zone_management": false, 00:17:27.435 "zone_append": false, 00:17:27.435 "compare": false, 00:17:27.435 "compare_and_write": false, 00:17:27.435 "abort": true, 00:17:27.435 "seek_hole": false, 00:17:27.435 "seek_data": false, 00:17:27.435 "copy": true, 00:17:27.435 "nvme_iov_md": false 00:17:27.435 }, 00:17:27.435 "memory_domains": [ 00:17:27.435 { 00:17:27.435 "dma_device_id": "system", 00:17:27.435 "dma_device_type": 1 00:17:27.435 }, 00:17:27.435 { 00:17:27.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.435 "dma_device_type": 2 00:17:27.435 } 00:17:27.435 ], 00:17:27.435 "driver_specific": {} 00:17:27.435 } 00:17:27.435 ] 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:27.435 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.436 
02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.436 02:33:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.436 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.436 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.436 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.436 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.436 "name": "Existed_Raid", 00:17:27.436 "uuid": "e023b1ca-5e41-497e-bf22-ea43950c1129", 00:17:27.436 "strip_size_kb": 0, 00:17:27.436 "state": "online", 00:17:27.436 "raid_level": "raid1", 00:17:27.436 "superblock": true, 00:17:27.436 "num_base_bdevs": 2, 00:17:27.436 "num_base_bdevs_discovered": 2, 00:17:27.436 
"num_base_bdevs_operational": 2, 00:17:27.436 "base_bdevs_list": [ 00:17:27.436 { 00:17:27.436 "name": "BaseBdev1", 00:17:27.436 "uuid": "20cb27f8-4ead-416e-9fc8-9fe98a7e1d72", 00:17:27.436 "is_configured": true, 00:17:27.436 "data_offset": 256, 00:17:27.436 "data_size": 7936 00:17:27.436 }, 00:17:27.436 { 00:17:27.436 "name": "BaseBdev2", 00:17:27.436 "uuid": "ded9d0c7-4c72-4869-ad9a-76bab2f9485b", 00:17:27.436 "is_configured": true, 00:17:27.436 "data_offset": 256, 00:17:27.436 "data_size": 7936 00:17:27.436 } 00:17:27.436 ] 00:17:27.436 }' 00:17:27.436 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.436 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.027 02:33:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.027 [2024-11-28 02:33:01.477190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:28.027 "name": "Existed_Raid", 00:17:28.027 "aliases": [ 00:17:28.027 "e023b1ca-5e41-497e-bf22-ea43950c1129" 00:17:28.027 ], 00:17:28.027 "product_name": "Raid Volume", 00:17:28.027 "block_size": 4128, 00:17:28.027 "num_blocks": 7936, 00:17:28.027 "uuid": "e023b1ca-5e41-497e-bf22-ea43950c1129", 00:17:28.027 "md_size": 32, 00:17:28.027 "md_interleave": true, 00:17:28.027 "dif_type": 0, 00:17:28.027 "assigned_rate_limits": { 00:17:28.027 "rw_ios_per_sec": 0, 00:17:28.027 "rw_mbytes_per_sec": 0, 00:17:28.027 "r_mbytes_per_sec": 0, 00:17:28.027 "w_mbytes_per_sec": 0 00:17:28.027 }, 00:17:28.027 "claimed": false, 00:17:28.027 "zoned": false, 00:17:28.027 "supported_io_types": { 00:17:28.027 "read": true, 00:17:28.027 "write": true, 00:17:28.027 "unmap": false, 00:17:28.027 "flush": false, 00:17:28.027 "reset": true, 00:17:28.027 "nvme_admin": false, 00:17:28.027 "nvme_io": false, 00:17:28.027 "nvme_io_md": false, 00:17:28.027 "write_zeroes": true, 00:17:28.027 "zcopy": false, 00:17:28.027 "get_zone_info": false, 00:17:28.027 "zone_management": false, 00:17:28.027 "zone_append": false, 00:17:28.027 "compare": false, 00:17:28.027 "compare_and_write": false, 00:17:28.027 "abort": false, 00:17:28.027 "seek_hole": false, 00:17:28.027 "seek_data": false, 00:17:28.027 "copy": false, 00:17:28.027 "nvme_iov_md": false 00:17:28.027 }, 00:17:28.027 "memory_domains": [ 00:17:28.027 { 00:17:28.027 "dma_device_id": "system", 00:17:28.027 "dma_device_type": 1 00:17:28.027 }, 00:17:28.027 { 00:17:28.027 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:28.027 "dma_device_type": 2 00:17:28.027 }, 00:17:28.027 { 00:17:28.027 "dma_device_id": "system", 00:17:28.027 "dma_device_type": 1 00:17:28.027 }, 00:17:28.027 { 00:17:28.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.027 "dma_device_type": 2 00:17:28.027 } 00:17:28.027 ], 00:17:28.027 "driver_specific": { 00:17:28.027 "raid": { 00:17:28.027 "uuid": "e023b1ca-5e41-497e-bf22-ea43950c1129", 00:17:28.027 "strip_size_kb": 0, 00:17:28.027 "state": "online", 00:17:28.027 "raid_level": "raid1", 00:17:28.027 "superblock": true, 00:17:28.027 "num_base_bdevs": 2, 00:17:28.027 "num_base_bdevs_discovered": 2, 00:17:28.027 "num_base_bdevs_operational": 2, 00:17:28.027 "base_bdevs_list": [ 00:17:28.027 { 00:17:28.027 "name": "BaseBdev1", 00:17:28.027 "uuid": "20cb27f8-4ead-416e-9fc8-9fe98a7e1d72", 00:17:28.027 "is_configured": true, 00:17:28.027 "data_offset": 256, 00:17:28.027 "data_size": 7936 00:17:28.027 }, 00:17:28.027 { 00:17:28.027 "name": "BaseBdev2", 00:17:28.027 "uuid": "ded9d0c7-4c72-4869-ad9a-76bab2f9485b", 00:17:28.027 "is_configured": true, 00:17:28.027 "data_offset": 256, 00:17:28.027 "data_size": 7936 00:17:28.027 } 00:17:28.027 ] 00:17:28.027 } 00:17:28.027 } 00:17:28.027 }' 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:28.027 BaseBdev2' 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.027 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:28.288 
02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.288 [2024-11-28 02:33:01.716479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.288 02:33:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.288 "name": "Existed_Raid", 00:17:28.288 "uuid": "e023b1ca-5e41-497e-bf22-ea43950c1129", 00:17:28.288 "strip_size_kb": 0, 00:17:28.288 "state": "online", 00:17:28.288 "raid_level": "raid1", 00:17:28.288 "superblock": true, 00:17:28.288 "num_base_bdevs": 2, 00:17:28.288 "num_base_bdevs_discovered": 1, 00:17:28.288 "num_base_bdevs_operational": 1, 00:17:28.288 "base_bdevs_list": [ 00:17:28.288 { 00:17:28.288 "name": null, 00:17:28.288 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:28.288 "is_configured": false, 00:17:28.288 "data_offset": 0, 00:17:28.288 "data_size": 7936 00:17:28.288 }, 00:17:28.288 { 00:17:28.288 "name": "BaseBdev2", 00:17:28.288 "uuid": "ded9d0c7-4c72-4869-ad9a-76bab2f9485b", 00:17:28.288 "is_configured": true, 00:17:28.288 "data_offset": 256, 00:17:28.288 "data_size": 7936 00:17:28.288 } 00:17:28.288 ] 00:17:28.288 }' 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.288 02:33:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:28.856 02:33:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.856 [2024-11-28 02:33:02.284175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.856 [2024-11-28 02:33:02.284316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.856 [2024-11-28 02:33:02.371678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.856 [2024-11-28 02:33:02.371814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.856 [2024-11-28 02:33:02.371856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88187 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88187 ']' 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88187 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:28.856 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.857 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88187 00:17:28.857 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.857 killing process with pid 88187 00:17:28.857 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.857 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88187' 00:17:28.857 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88187 00:17:28.857 [2024-11-28 02:33:02.471800] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.857 02:33:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88187 00:17:28.857 [2024-11-28 02:33:02.488086] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.233 
02:33:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:30.233 00:17:30.233 real 0m4.977s 00:17:30.233 user 0m7.259s 00:17:30.233 sys 0m0.837s 00:17:30.233 02:33:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.233 ************************************ 00:17:30.233 END TEST raid_state_function_test_sb_md_interleaved 00:17:30.233 ************************************ 00:17:30.233 02:33:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.233 02:33:03 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:30.233 02:33:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:30.233 02:33:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.233 02:33:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.233 ************************************ 00:17:30.233 START TEST raid_superblock_test_md_interleaved 00:17:30.233 ************************************ 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88434 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88434 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88434 ']' 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.233 02:33:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.233 [2024-11-28 02:33:03.696092] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:30.233 [2024-11-28 02:33:03.696283] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88434 ] 00:17:30.233 [2024-11-28 02:33:03.870192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.491 [2024-11-28 02:33:03.974457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.491 [2024-11-28 02:33:04.156566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.491 [2024-11-28 02:33:04.156695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.059 malloc1 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.059 [2024-11-28 02:33:04.564014] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.059 [2024-11-28 02:33:04.564114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.059 [2024-11-28 02:33:04.564170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:31.059 [2024-11-28 02:33:04.564199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.059 
[2024-11-28 02:33:04.565950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.059 [2024-11-28 02:33:04.566012] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.059 pt1 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.059 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.060 malloc2 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.060 [2024-11-28 02:33:04.621244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.060 [2024-11-28 02:33:04.621358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.060 [2024-11-28 02:33:04.621395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:31.060 [2024-11-28 02:33:04.621421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.060 [2024-11-28 02:33:04.623175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.060 [2024-11-28 02:33:04.623253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.060 pt2 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.060 [2024-11-28 02:33:04.633254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.060 [2024-11-28 02:33:04.635040] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:31.060 [2024-11-28 02:33:04.635275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:17:31.060 [2024-11-28 02:33:04.635318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:17:31.060 [2024-11-28 02:33:04.635407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:17:31.060 [2024-11-28 02:33:04.635504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:17:31.060 [2024-11-28 02:33:04.635545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:17:31.060 [2024-11-28 02:33:04.635647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:31.060 "name": "raid_bdev1",
00:17:31.060 "uuid": "f6b20631-245b-49d8-a8b0-87d53a4df946",
00:17:31.060 "strip_size_kb": 0,
00:17:31.060 "state": "online",
00:17:31.060 "raid_level": "raid1",
00:17:31.060 "superblock": true,
00:17:31.060 "num_base_bdevs": 2,
00:17:31.060 "num_base_bdevs_discovered": 2,
00:17:31.060 "num_base_bdevs_operational": 2,
00:17:31.060 "base_bdevs_list": [
00:17:31.060 {
00:17:31.060 "name": "pt1",
00:17:31.060 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:31.060 "is_configured": true,
00:17:31.060 "data_offset": 256,
00:17:31.060 "data_size": 7936
00:17:31.060 },
00:17:31.060 {
00:17:31.060 "name": "pt2",
00:17:31.060 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:31.060 "is_configured": true,
00:17:31.060 "data_offset": 256,
00:17:31.060 "data_size": 7936
00:17:31.060 }
00:17:31.060 ]
00:17:31.060 }'
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:31.060 02:33:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.626 [2024-11-28 02:33:05.096709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.626 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:31.626 "name": "raid_bdev1",
00:17:31.626 "aliases": [
00:17:31.626 "f6b20631-245b-49d8-a8b0-87d53a4df946"
00:17:31.626 ],
00:17:31.626 "product_name": "Raid Volume",
00:17:31.626 "block_size": 4128,
00:17:31.626 "num_blocks": 7936,
00:17:31.626 "uuid": "f6b20631-245b-49d8-a8b0-87d53a4df946",
00:17:31.626 "md_size": 32,
00:17:31.626 "md_interleave": true,
00:17:31.626 "dif_type": 0,
00:17:31.626 "assigned_rate_limits": {
00:17:31.626 "rw_ios_per_sec": 0,
00:17:31.626 "rw_mbytes_per_sec": 0,
00:17:31.626 "r_mbytes_per_sec": 0,
00:17:31.626 "w_mbytes_per_sec": 0
00:17:31.626 },
00:17:31.626 "claimed": false,
00:17:31.626 "zoned": false,
00:17:31.626 "supported_io_types": {
00:17:31.626 "read": true,
00:17:31.626 "write": true,
00:17:31.626 "unmap": false,
00:17:31.626 "flush": false,
00:17:31.626 "reset": true,
00:17:31.626 "nvme_admin": false,
00:17:31.626 "nvme_io": false,
00:17:31.626 "nvme_io_md": false,
00:17:31.626 "write_zeroes": true,
00:17:31.626 "zcopy": false,
00:17:31.626 "get_zone_info": false,
00:17:31.626 "zone_management": false,
00:17:31.626 "zone_append": false,
00:17:31.626 "compare": false,
00:17:31.626 "compare_and_write": false,
00:17:31.626 "abort": false,
00:17:31.626 "seek_hole": false,
00:17:31.626 "seek_data": false,
00:17:31.626 "copy": false,
00:17:31.626 "nvme_iov_md": false
00:17:31.626 },
00:17:31.626 "memory_domains": [
00:17:31.626 {
00:17:31.626 "dma_device_id": "system",
00:17:31.626 "dma_device_type": 1
00:17:31.626 },
00:17:31.626 {
00:17:31.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:31.626 "dma_device_type": 2
00:17:31.626 },
00:17:31.626 {
00:17:31.626 "dma_device_id": "system",
00:17:31.626 "dma_device_type": 1
00:17:31.626 },
00:17:31.626 {
00:17:31.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:31.626 "dma_device_type": 2
00:17:31.626 }
00:17:31.626 ],
00:17:31.626 "driver_specific": {
00:17:31.626 "raid": {
00:17:31.626 "uuid": "f6b20631-245b-49d8-a8b0-87d53a4df946",
00:17:31.626 "strip_size_kb": 0,
00:17:31.626 "state": "online",
00:17:31.627 "raid_level": "raid1",
00:17:31.627 "superblock": true,
00:17:31.627 "num_base_bdevs": 2,
00:17:31.627 "num_base_bdevs_discovered": 2,
00:17:31.627 "num_base_bdevs_operational": 2,
00:17:31.627 "base_bdevs_list": [
00:17:31.627 {
00:17:31.627 "name": "pt1",
00:17:31.627 "uuid":
"00000000-0000-0000-0000-000000000001",
00:17:31.627 "is_configured": true,
00:17:31.627 "data_offset": 256,
00:17:31.627 "data_size": 7936
00:17:31.627 },
00:17:31.627 {
00:17:31.627 "name": "pt2",
00:17:31.627 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:31.627 "is_configured": true,
00:17:31.627 "data_offset": 256,
00:17:31.627 "data_size": 7936
00:17:31.627 }
00:17:31.627 ]
00:17:31.627 }
00:17:31.627 }
00:17:31.627 }'
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:31.627 pt2'
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.627 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.885 [2024-11-28 02:33:05.332273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f6b20631-245b-49d8-a8b0-87d53a4df946
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f6b20631-245b-49d8-a8b0-87d53a4df946 ']'
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.885 [2024-11-28 02:33:05.375950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:31.885 [2024-11-28 02:33:05.375968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:31.885 [2024-11-28 02:33:05.376035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:31.885 [2024-11-28 02:33:05.376084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:31.885 [2024-11-28 02:33:05.376095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.885 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.886 02:33:05
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.886 [2024-11-28 02:33:05.507743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:31.886 [2024-11-28 02:33:05.509567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:31.886 [2024-11-28 02:33:05.509646] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:17:31.886 [2024-11-28 02:33:05.509695] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:17:31.886 [2024-11-28 02:33:05.509710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:31.886 [2024-11-28 02:33:05.509720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:17:31.886 request:
00:17:31.886 {
00:17:31.886 "name": "raid_bdev1",
00:17:31.886 "raid_level": "raid1",
00:17:31.886 "base_bdevs": [
00:17:31.886 "malloc1",
00:17:31.886 "malloc2"
00:17:31.886 ],
00:17:31.886 "superblock": false,
00:17:31.886 "method": "bdev_raid_create",
00:17:31.886 "req_id": 1
00:17:31.886 }
00:17:31.886 Got JSON-RPC error response
00:17:31.886 response:
00:17:31.886 {
00:17:31.886 "code": -17,
00:17:31.886 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:31.886 }
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:31.886 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:32.145 [2024-11-28 02:33:05.571619] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:32.145 [2024-11-28 02:33:05.571666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:32.145 [2024-11-28 02:33:05.571681] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:17:32.145 [2024-11-28 02:33:05.571691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:32.145 [2024-11-28 02:33:05.573542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:32.145 [2024-11-28 02:33:05.573580] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:32.145 [2024-11-28 02:33:05.573623] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:32.145 [2024-11-28 02:33:05.573678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:32.145 pt1
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:32.145
"name": "raid_bdev1",
00:17:32.145 "uuid": "f6b20631-245b-49d8-a8b0-87d53a4df946",
00:17:32.145 "strip_size_kb": 0,
00:17:32.145 "state": "configuring",
00:17:32.145 "raid_level": "raid1",
00:17:32.145 "superblock": true,
00:17:32.145 "num_base_bdevs": 2,
00:17:32.145 "num_base_bdevs_discovered": 1,
00:17:32.145 "num_base_bdevs_operational": 2,
00:17:32.145 "base_bdevs_list": [
00:17:32.145 {
00:17:32.145 "name": "pt1",
00:17:32.145 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:32.145 "is_configured": true,
00:17:32.145 "data_offset": 256,
00:17:32.145 "data_size": 7936
00:17:32.145 },
00:17:32.145 {
00:17:32.145 "name": null,
00:17:32.145 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:32.145 "is_configured": false,
00:17:32.145 "data_offset": 256,
00:17:32.145 "data_size": 7936
00:17:32.145 }
00:17:32.145 ]
00:17:32.145 }'
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:32.145 02:33:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:32.404 [2024-11-28 02:33:06.014856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:32.404 [2024-11-28 02:33:06.014932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:32.404 [2024-11-28 02:33:06.014953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:17:32.404 [2024-11-28 02:33:06.014964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:32.404 [2024-11-28 02:33:06.015113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:32.404 [2024-11-28 02:33:06.015134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:32.404 [2024-11-28 02:33:06.015180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:32.404 [2024-11-28 02:33:06.015204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:32.404 [2024-11-28 02:33:06.015300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:17:32.404 [2024-11-28 02:33:06.015321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:17:32.404 [2024-11-28 02:33:06.015395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:17:32.404 [2024-11-28 02:33:06.015463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:17:32.404 [2024-11-28 02:33:06.015472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:17:32.404 [2024-11-28 02:33:06.015533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:32.404 pt2
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:32.404 "name": "raid_bdev1",
00:17:32.404 "uuid": "f6b20631-245b-49d8-a8b0-87d53a4df946",
00:17:32.404 "strip_size_kb": 0,
00:17:32.404 "state": "online",
00:17:32.404 "raid_level": "raid1",
00:17:32.404 "superblock": true,
00:17:32.404 "num_base_bdevs": 2,
00:17:32.404 "num_base_bdevs_discovered": 2,
00:17:32.404 "num_base_bdevs_operational": 2,
00:17:32.404 "base_bdevs_list": [
00:17:32.404 {
00:17:32.404 "name": "pt1",
00:17:32.404 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:32.404 "is_configured": true,
00:17:32.404 "data_offset": 256,
00:17:32.404 "data_size": 7936
00:17:32.404 },
00:17:32.404 {
00:17:32.404 "name": "pt2",
00:17:32.404 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:32.404 "is_configured": true,
00:17:32.404 "data_offset": 256,
00:17:32.404 "data_size": 7936
00:17:32.404 }
00:17:32.404 ]
00:17:32.404 }'
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:32.404 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:32.973 02:33:06
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:32.973 [2024-11-28 02:33:06.398424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:32.973 "name": "raid_bdev1",
00:17:32.973 "aliases": [
00:17:32.973 "f6b20631-245b-49d8-a8b0-87d53a4df946"
00:17:32.973 ],
00:17:32.973 "product_name": "Raid Volume",
00:17:32.973 "block_size": 4128,
00:17:32.973 "num_blocks": 7936,
00:17:32.973 "uuid": "f6b20631-245b-49d8-a8b0-87d53a4df946",
00:17:32.973 "md_size": 32,
00:17:32.973 "md_interleave": true,
00:17:32.973 "dif_type": 0,
00:17:32.973 "assigned_rate_limits": {
00:17:32.973 "rw_ios_per_sec": 0,
00:17:32.973 "rw_mbytes_per_sec": 0,
00:17:32.973 "r_mbytes_per_sec": 0,
00:17:32.973 "w_mbytes_per_sec": 0
00:17:32.973 },
00:17:32.973 "claimed": false,
00:17:32.973 "zoned": false,
00:17:32.973 "supported_io_types": {
00:17:32.973 "read": true,
00:17:32.973 "write": true,
00:17:32.973 "unmap": false,
00:17:32.973 "flush": false,
00:17:32.973 "reset": true,
00:17:32.973 "nvme_admin": false,
00:17:32.973 "nvme_io": false,
00:17:32.973 "nvme_io_md": false,
00:17:32.973 "write_zeroes": true,
00:17:32.973 "zcopy": false,
00:17:32.973 "get_zone_info": false,
00:17:32.973 "zone_management": false,
00:17:32.973 "zone_append": false,
00:17:32.973 "compare": false,
00:17:32.973 "compare_and_write": false,
00:17:32.973 "abort": false,
00:17:32.973 "seek_hole": false,
00:17:32.973 "seek_data": false,
00:17:32.973 "copy": false,
00:17:32.973 "nvme_iov_md": false
00:17:32.973 },
00:17:32.973 "memory_domains": [
00:17:32.973 {
00:17:32.973 "dma_device_id": "system",
00:17:32.973 "dma_device_type": 1
00:17:32.973 },
00:17:32.973 {
00:17:32.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:32.973 "dma_device_type": 2
00:17:32.973 },
00:17:32.973 {
00:17:32.973 "dma_device_id": "system",
00:17:32.973 "dma_device_type": 1
00:17:32.973 },
00:17:32.973 {
00:17:32.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:32.973 "dma_device_type": 2
00:17:32.973 }
00:17:32.973 ],
00:17:32.973 "driver_specific": {
00:17:32.973 "raid": {
00:17:32.973 "uuid": "f6b20631-245b-49d8-a8b0-87d53a4df946",
00:17:32.973 "strip_size_kb": 0,
00:17:32.973 "state": "online",
00:17:32.973 "raid_level": "raid1",
00:17:32.973 "superblock": true,
00:17:32.973 "num_base_bdevs": 2,
00:17:32.973 "num_base_bdevs_discovered": 2,
00:17:32.973 "num_base_bdevs_operational": 2,
00:17:32.973 "base_bdevs_list": [
00:17:32.973 {
00:17:32.973 "name": "pt1",
00:17:32.973 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:32.973 "is_configured": true,
00:17:32.973 "data_offset": 256,
00:17:32.973 "data_size": 7936
00:17:32.973 },
00:17:32.973 {
00:17:32.973 "name": "pt2",
00:17:32.973 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:32.973 "is_configured": true,
00:17:32.973 "data_offset": 256,
00:17:32.973 "data_size": 7936
00:17:32.973 }
00:17:32.973 ]
00:17:32.973 }
00:17:32.973 }
00:17:32.973 }'
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:32.973 pt2'
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
[2024-11-28 02:33:06.634037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:32.973 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f6b20631-245b-49d8-a8b0-87d53a4df946 '!=' f6b20631-245b-49d8-a8b0-87d53a4df946 ']'
00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:33.232 [2024-11-28 02:33:06.685696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- #
[[ 0 == 0 ]] 00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.232 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:33.233 "name": "raid_bdev1", 00:17:33.233 "uuid": "f6b20631-245b-49d8-a8b0-87d53a4df946", 00:17:33.233 "strip_size_kb": 0, 00:17:33.233 "state": "online", 00:17:33.233 "raid_level": "raid1", 00:17:33.233 "superblock": true, 00:17:33.233 "num_base_bdevs": 2, 00:17:33.233 "num_base_bdevs_discovered": 1, 00:17:33.233 "num_base_bdevs_operational": 1, 00:17:33.233 "base_bdevs_list": [ 00:17:33.233 { 00:17:33.233 "name": null, 00:17:33.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.233 "is_configured": false, 00:17:33.233 "data_offset": 0, 00:17:33.233 "data_size": 7936 00:17:33.233 }, 00:17:33.233 { 00:17:33.233 "name": "pt2", 00:17:33.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.233 "is_configured": true, 00:17:33.233 "data_offset": 256, 00:17:33.233 "data_size": 7936 00:17:33.233 } 00:17:33.233 ] 00:17:33.233 }' 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.233 02:33:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.491 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:33.491 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.492 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.492 [2024-11-28 02:33:07.124914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.492 [2024-11-28 02:33:07.124947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.492 [2024-11-28 02:33:07.125011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.492 [2024-11-28 02:33:07.125055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:33.492 [2024-11-28 02:33:07.125066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:33.492 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.492 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.492 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.492 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.492 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:33.492 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.751 [2024-11-28 02:33:07.200791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.751 [2024-11-28 02:33:07.200848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.751 [2024-11-28 02:33:07.200881] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:33.751 [2024-11-28 02:33:07.200892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.751 [2024-11-28 02:33:07.202738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.751 [2024-11-28 02:33:07.202777] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.751 [2024-11-28 02:33:07.202825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:33.751 [2024-11-28 02:33:07.202877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.751 [2024-11-28 02:33:07.202958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:33.751 [2024-11-28 02:33:07.202989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:17:33.751 [2024-11-28 02:33:07.203078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:33.751 [2024-11-28 02:33:07.203151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:33.751 [2024-11-28 02:33:07.203162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:33.751 [2024-11-28 02:33:07.203229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.751 pt2 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.751 02:33:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.751 "name": "raid_bdev1", 00:17:33.751 "uuid": "f6b20631-245b-49d8-a8b0-87d53a4df946", 00:17:33.751 "strip_size_kb": 0, 00:17:33.751 "state": "online", 00:17:33.751 "raid_level": "raid1", 00:17:33.751 "superblock": true, 00:17:33.751 "num_base_bdevs": 2, 00:17:33.751 "num_base_bdevs_discovered": 1, 00:17:33.751 "num_base_bdevs_operational": 1, 00:17:33.751 "base_bdevs_list": [ 00:17:33.751 { 00:17:33.751 "name": null, 00:17:33.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.751 "is_configured": false, 00:17:33.751 "data_offset": 256, 00:17:33.751 "data_size": 7936 00:17:33.751 }, 00:17:33.751 { 00:17:33.751 "name": "pt2", 00:17:33.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.751 "is_configured": true, 00:17:33.751 "data_offset": 256, 00:17:33.751 "data_size": 7936 00:17:33.751 } 00:17:33.751 ] 00:17:33.751 }' 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.751 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:34.011 02:33:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.011 [2024-11-28 02:33:07.624024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.011 [2024-11-28 02:33:07.624052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.011 [2024-11-28 02:33:07.624106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.011 [2024-11-28 02:33:07.624158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.011 [2024-11-28 02:33:07.624168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.011 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.011 [2024-11-28 02:33:07.683958] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:34.011 [2024-11-28 02:33:07.684003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.011 [2024-11-28 02:33:07.684037] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:34.011 [2024-11-28 02:33:07.684045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.011 [2024-11-28 02:33:07.685903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.011 [2024-11-28 02:33:07.685961] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:34.011 [2024-11-28 02:33:07.686006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:34.011 [2024-11-28 02:33:07.686048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:34.011 [2024-11-28 02:33:07.686138] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:34.011 [2024-11-28 02:33:07.686156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.011 [2024-11-28 02:33:07.686172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:34.011 [2024-11-28 02:33:07.686242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.011 [2024-11-28 02:33:07.686312] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:17:34.011 [2024-11-28 02:33:07.686328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:34.011 [2024-11-28 02:33:07.686394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:34.011 [2024-11-28 02:33:07.686457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:34.012 [2024-11-28 02:33:07.686470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:34.012 [2024-11-28 02:33:07.686536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.271 pt1 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.271 02:33:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.271 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.271 "name": "raid_bdev1", 00:17:34.271 "uuid": "f6b20631-245b-49d8-a8b0-87d53a4df946", 00:17:34.271 "strip_size_kb": 0, 00:17:34.271 "state": "online", 00:17:34.271 "raid_level": "raid1", 00:17:34.271 "superblock": true, 00:17:34.271 "num_base_bdevs": 2, 00:17:34.271 "num_base_bdevs_discovered": 1, 00:17:34.271 "num_base_bdevs_operational": 1, 00:17:34.271 "base_bdevs_list": [ 00:17:34.271 { 00:17:34.271 "name": null, 00:17:34.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.271 "is_configured": false, 00:17:34.271 "data_offset": 256, 00:17:34.271 "data_size": 7936 00:17:34.271 }, 00:17:34.271 { 00:17:34.271 "name": "pt2", 00:17:34.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.272 "is_configured": true, 00:17:34.272 "data_offset": 256, 00:17:34.272 "data_size": 7936 00:17:34.272 } 00:17:34.272 ] 00:17:34.272 }' 00:17:34.272 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.272 02:33:07 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.530 [2024-11-28 02:33:08.175284] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.530 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.531 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f6b20631-245b-49d8-a8b0-87d53a4df946 '!=' f6b20631-245b-49d8-a8b0-87d53a4df946 ']' 00:17:34.531 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88434 00:17:34.531 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88434 ']' 00:17:34.531 02:33:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88434 00:17:34.531 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:34.531 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.531 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88434 00:17:34.789 killing process with pid 88434 00:17:34.789 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.789 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.789 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88434' 00:17:34.789 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88434 00:17:34.789 [2024-11-28 02:33:08.239651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.789 [2024-11-28 02:33:08.239723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.789 [2024-11-28 02:33:08.239764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.789 [2024-11-28 02:33:08.239776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:34.789 02:33:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88434 00:17:34.789 [2024-11-28 02:33:08.432784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:36.171 02:33:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:36.171 00:17:36.171 real 0m5.869s 00:17:36.171 user 0m8.947s 00:17:36.171 sys 0m1.042s 00:17:36.171 
02:33:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.171 02:33:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.171 ************************************ 00:17:36.171 END TEST raid_superblock_test_md_interleaved 00:17:36.171 ************************************ 00:17:36.171 02:33:09 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:36.171 02:33:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:36.171 02:33:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.171 02:33:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:36.171 ************************************ 00:17:36.171 START TEST raid_rebuild_test_sb_md_interleaved 00:17:36.171 ************************************ 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:36.171 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88756 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88756 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88756 ']' 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.172 02:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.172 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:36.172 Zero copy mechanism will not be used. 00:17:36.172 [2024-11-28 02:33:09.650012] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:36.172 [2024-11-28 02:33:09.650132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88756 ] 00:17:36.172 [2024-11-28 02:33:09.824304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.431 [2024-11-28 02:33:09.924396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.691 [2024-11-28 02:33:10.111057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.691 [2024-11-28 02:33:10.111115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.951 BaseBdev1_malloc 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.951 02:33:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.951 [2024-11-28 02:33:10.504730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:36.951 [2024-11-28 02:33:10.504792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.951 [2024-11-28 02:33:10.504832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:36.951 [2024-11-28 02:33:10.504843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.951 [2024-11-28 02:33:10.506674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.951 [2024-11-28 02:33:10.506712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:36.951 BaseBdev1 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.951 BaseBdev2_malloc 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:36.951 [2024-11-28 02:33:10.557844] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:36.951 [2024-11-28 02:33:10.557902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.951 [2024-11-28 02:33:10.557947] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:36.951 [2024-11-28 02:33:10.557960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.951 [2024-11-28 02:33:10.559737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.951 [2024-11-28 02:33:10.559773] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:36.951 BaseBdev2 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.951 spare_malloc 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.951 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.211 spare_delay 00:17:37.211 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.211 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.211 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.211 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.211 [2024-11-28 02:33:10.636825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.211 [2024-11-28 02:33:10.636883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.211 [2024-11-28 02:33:10.636905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:37.211 [2024-11-28 02:33:10.636915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.211 [2024-11-28 02:33:10.638673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.211 [2024-11-28 02:33:10.638710] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.211 spare 00:17:37.211 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.211 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:37.211 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.212 [2024-11-28 02:33:10.648843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.212 [2024-11-28 02:33:10.650616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.212 [2024-11-28 
02:33:10.650817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:37.212 [2024-11-28 02:33:10.650839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:37.212 [2024-11-28 02:33:10.650913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:37.212 [2024-11-28 02:33:10.650997] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:37.212 [2024-11-28 02:33:10.651005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:37.212 [2024-11-28 02:33:10.651070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.212 "name": "raid_bdev1", 00:17:37.212 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:37.212 "strip_size_kb": 0, 00:17:37.212 "state": "online", 00:17:37.212 "raid_level": "raid1", 00:17:37.212 "superblock": true, 00:17:37.212 "num_base_bdevs": 2, 00:17:37.212 "num_base_bdevs_discovered": 2, 00:17:37.212 "num_base_bdevs_operational": 2, 00:17:37.212 "base_bdevs_list": [ 00:17:37.212 { 00:17:37.212 "name": "BaseBdev1", 00:17:37.212 "uuid": "6a1ef9fa-5a18-517c-936a-9eeab35f98ef", 00:17:37.212 "is_configured": true, 00:17:37.212 "data_offset": 256, 00:17:37.212 "data_size": 7936 00:17:37.212 }, 00:17:37.212 { 00:17:37.212 "name": "BaseBdev2", 00:17:37.212 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:37.212 "is_configured": true, 00:17:37.212 "data_offset": 256, 00:17:37.212 "data_size": 7936 00:17:37.212 } 00:17:37.212 ] 00:17:37.212 }' 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.212 02:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.472 02:33:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:37.472 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.472 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.472 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:37.472 [2024-11-28 02:33:11.124315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.472 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:37.733 02:33:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.733 [2024-11-28 02:33:11.223810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.733 02:33:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.733 "name": "raid_bdev1", 00:17:37.733 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:37.733 "strip_size_kb": 0, 00:17:37.733 "state": "online", 00:17:37.733 "raid_level": "raid1", 00:17:37.733 "superblock": true, 00:17:37.733 "num_base_bdevs": 2, 00:17:37.733 "num_base_bdevs_discovered": 1, 00:17:37.733 "num_base_bdevs_operational": 1, 00:17:37.733 "base_bdevs_list": [ 00:17:37.733 { 00:17:37.733 "name": null, 00:17:37.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.733 "is_configured": false, 00:17:37.733 "data_offset": 0, 00:17:37.733 "data_size": 7936 00:17:37.733 }, 00:17:37.733 { 00:17:37.733 "name": "BaseBdev2", 00:17:37.733 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:37.733 "is_configured": true, 00:17:37.733 "data_offset": 256, 00:17:37.733 "data_size": 7936 00:17:37.733 } 00:17:37.733 ] 00:17:37.733 }' 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.733 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.993 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:37.993 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.993 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.993 [2024-11-28 02:33:11.635113] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.993 [2024-11-28 02:33:11.651273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:37.993 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.993 02:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:37.993 [2024-11-28 02:33:11.653149] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.375 "name": "raid_bdev1", 00:17:39.375 
"uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:39.375 "strip_size_kb": 0, 00:17:39.375 "state": "online", 00:17:39.375 "raid_level": "raid1", 00:17:39.375 "superblock": true, 00:17:39.375 "num_base_bdevs": 2, 00:17:39.375 "num_base_bdevs_discovered": 2, 00:17:39.375 "num_base_bdevs_operational": 2, 00:17:39.375 "process": { 00:17:39.375 "type": "rebuild", 00:17:39.375 "target": "spare", 00:17:39.375 "progress": { 00:17:39.375 "blocks": 2560, 00:17:39.375 "percent": 32 00:17:39.375 } 00:17:39.375 }, 00:17:39.375 "base_bdevs_list": [ 00:17:39.375 { 00:17:39.375 "name": "spare", 00:17:39.375 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:39.375 "is_configured": true, 00:17:39.375 "data_offset": 256, 00:17:39.375 "data_size": 7936 00:17:39.375 }, 00:17:39.375 { 00:17:39.375 "name": "BaseBdev2", 00:17:39.375 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:39.375 "is_configured": true, 00:17:39.375 "data_offset": 256, 00:17:39.375 "data_size": 7936 00:17:39.375 } 00:17:39.375 ] 00:17:39.375 }' 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.375 [2024-11-28 02:33:12.792784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:39.375 [2024-11-28 02:33:12.857680] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:39.375 [2024-11-28 02:33:12.857756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.375 [2024-11-28 02:33:12.857770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.375 [2024-11-28 02:33:12.857782] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.375 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.376 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.376 "name": "raid_bdev1", 00:17:39.376 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:39.376 "strip_size_kb": 0, 00:17:39.376 "state": "online", 00:17:39.376 "raid_level": "raid1", 00:17:39.376 "superblock": true, 00:17:39.376 "num_base_bdevs": 2, 00:17:39.376 "num_base_bdevs_discovered": 1, 00:17:39.376 "num_base_bdevs_operational": 1, 00:17:39.376 "base_bdevs_list": [ 00:17:39.376 { 00:17:39.376 "name": null, 00:17:39.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.376 "is_configured": false, 00:17:39.376 "data_offset": 0, 00:17:39.376 "data_size": 7936 00:17:39.376 }, 00:17:39.376 { 00:17:39.376 "name": "BaseBdev2", 00:17:39.376 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:39.376 "is_configured": true, 00:17:39.376 "data_offset": 256, 00:17:39.376 "data_size": 7936 00:17:39.376 } 00:17:39.376 ] 00:17:39.376 }' 00:17:39.376 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.376 02:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.945 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.946 "name": "raid_bdev1", 00:17:39.946 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:39.946 "strip_size_kb": 0, 00:17:39.946 "state": "online", 00:17:39.946 "raid_level": "raid1", 00:17:39.946 "superblock": true, 00:17:39.946 "num_base_bdevs": 2, 00:17:39.946 "num_base_bdevs_discovered": 1, 00:17:39.946 "num_base_bdevs_operational": 1, 00:17:39.946 "base_bdevs_list": [ 00:17:39.946 { 00:17:39.946 "name": null, 00:17:39.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.946 "is_configured": false, 00:17:39.946 "data_offset": 0, 00:17:39.946 "data_size": 7936 00:17:39.946 }, 00:17:39.946 { 00:17:39.946 "name": "BaseBdev2", 00:17:39.946 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:39.946 "is_configured": true, 00:17:39.946 "data_offset": 256, 00:17:39.946 "data_size": 7936 00:17:39.946 } 00:17:39.946 ] 00:17:39.946 }' 
00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.946 [2024-11-28 02:33:13.467928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.946 [2024-11-28 02:33:13.483025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.946 02:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:39.946 [2024-11-28 02:33:13.484763] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.886 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.886 "name": "raid_bdev1", 00:17:40.886 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:40.886 "strip_size_kb": 0, 00:17:40.886 "state": "online", 00:17:40.886 "raid_level": "raid1", 00:17:40.886 "superblock": true, 00:17:40.886 "num_base_bdevs": 2, 00:17:40.886 "num_base_bdevs_discovered": 2, 00:17:40.886 "num_base_bdevs_operational": 2, 00:17:40.886 "process": { 00:17:40.886 "type": "rebuild", 00:17:40.886 "target": "spare", 00:17:40.886 "progress": { 00:17:40.886 "blocks": 2560, 00:17:40.886 "percent": 32 00:17:40.886 } 00:17:40.886 }, 00:17:40.886 "base_bdevs_list": [ 00:17:40.886 { 00:17:40.886 "name": "spare", 00:17:40.886 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:40.886 "is_configured": true, 00:17:40.886 "data_offset": 256, 00:17:40.886 "data_size": 7936 00:17:40.886 }, 00:17:40.886 { 00:17:40.886 "name": "BaseBdev2", 00:17:40.886 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:40.886 "is_configured": true, 00:17:40.886 "data_offset": 256, 00:17:40.886 "data_size": 7936 00:17:40.886 } 00:17:40.886 ] 00:17:40.886 }' 00:17:40.886 02:33:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:41.148 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=724 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.148 02:33:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.148 "name": "raid_bdev1", 00:17:41.148 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:41.148 "strip_size_kb": 0, 00:17:41.148 "state": "online", 00:17:41.148 "raid_level": "raid1", 00:17:41.148 "superblock": true, 00:17:41.148 "num_base_bdevs": 2, 00:17:41.148 "num_base_bdevs_discovered": 2, 00:17:41.148 "num_base_bdevs_operational": 2, 00:17:41.148 "process": { 00:17:41.148 "type": "rebuild", 00:17:41.148 "target": "spare", 00:17:41.148 "progress": { 00:17:41.148 "blocks": 2816, 00:17:41.148 "percent": 35 00:17:41.148 } 00:17:41.148 }, 00:17:41.148 "base_bdevs_list": [ 00:17:41.148 { 00:17:41.148 "name": "spare", 00:17:41.148 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:41.148 "is_configured": true, 00:17:41.148 "data_offset": 256, 00:17:41.148 "data_size": 7936 00:17:41.148 }, 00:17:41.148 { 00:17:41.148 "name": "BaseBdev2", 00:17:41.148 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:41.148 "is_configured": true, 00:17:41.148 "data_offset": 256, 00:17:41.148 "data_size": 7936 00:17:41.148 } 00:17:41.148 ] 00:17:41.148 }' 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.148 02:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.528 02:33:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.528 "name": "raid_bdev1", 00:17:42.528 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:42.528 "strip_size_kb": 0, 00:17:42.528 "state": "online", 00:17:42.528 "raid_level": "raid1", 00:17:42.528 "superblock": true, 00:17:42.528 "num_base_bdevs": 2, 00:17:42.528 "num_base_bdevs_discovered": 2, 00:17:42.528 "num_base_bdevs_operational": 2, 00:17:42.528 "process": { 00:17:42.528 "type": "rebuild", 00:17:42.528 "target": "spare", 00:17:42.528 "progress": { 00:17:42.528 "blocks": 5632, 00:17:42.528 "percent": 70 00:17:42.528 } 00:17:42.528 }, 00:17:42.528 "base_bdevs_list": [ 00:17:42.528 { 00:17:42.528 "name": "spare", 00:17:42.528 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:42.528 "is_configured": true, 00:17:42.528 "data_offset": 256, 00:17:42.528 "data_size": 7936 00:17:42.528 }, 00:17:42.528 { 00:17:42.528 "name": "BaseBdev2", 00:17:42.528 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:42.528 "is_configured": true, 00:17:42.528 "data_offset": 256, 00:17:42.528 "data_size": 7936 00:17:42.528 } 00:17:42.528 ] 00:17:42.528 }' 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.528 02:33:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.098 [2024-11-28 02:33:16.595958] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:43.098 [2024-11-28 02:33:16.596051] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:43.098 [2024-11-28 02:33:16.596152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.357 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.357 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.357 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.357 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.358 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.358 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.358 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.358 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.358 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.358 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.358 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.358 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.358 "name": "raid_bdev1", 00:17:43.358 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:43.358 "strip_size_kb": 0, 00:17:43.358 "state": "online", 00:17:43.358 "raid_level": "raid1", 00:17:43.358 "superblock": true, 00:17:43.358 "num_base_bdevs": 2, 00:17:43.358 
"num_base_bdevs_discovered": 2, 00:17:43.358 "num_base_bdevs_operational": 2, 00:17:43.358 "base_bdevs_list": [ 00:17:43.358 { 00:17:43.358 "name": "spare", 00:17:43.358 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:43.358 "is_configured": true, 00:17:43.358 "data_offset": 256, 00:17:43.358 "data_size": 7936 00:17:43.358 }, 00:17:43.358 { 00:17:43.358 "name": "BaseBdev2", 00:17:43.358 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:43.358 "is_configured": true, 00:17:43.358 "data_offset": 256, 00:17:43.358 "data_size": 7936 00:17:43.358 } 00:17:43.358 ] 00:17:43.358 }' 00:17:43.358 02:33:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.358 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:43.358 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.616 02:33:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.616 "name": "raid_bdev1", 00:17:43.616 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:43.616 "strip_size_kb": 0, 00:17:43.616 "state": "online", 00:17:43.616 "raid_level": "raid1", 00:17:43.616 "superblock": true, 00:17:43.616 "num_base_bdevs": 2, 00:17:43.616 "num_base_bdevs_discovered": 2, 00:17:43.616 "num_base_bdevs_operational": 2, 00:17:43.616 "base_bdevs_list": [ 00:17:43.616 { 00:17:43.616 "name": "spare", 00:17:43.616 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:43.616 "is_configured": true, 00:17:43.616 "data_offset": 256, 00:17:43.616 "data_size": 7936 00:17:43.616 }, 00:17:43.616 { 00:17:43.616 "name": "BaseBdev2", 00:17:43.616 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:43.616 "is_configured": true, 00:17:43.616 "data_offset": 256, 00:17:43.616 "data_size": 7936 00:17:43.616 } 00:17:43.616 ] 00:17:43.616 }' 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.616 02:33:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.616 "name": 
"raid_bdev1", 00:17:43.616 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:43.616 "strip_size_kb": 0, 00:17:43.616 "state": "online", 00:17:43.616 "raid_level": "raid1", 00:17:43.616 "superblock": true, 00:17:43.616 "num_base_bdevs": 2, 00:17:43.616 "num_base_bdevs_discovered": 2, 00:17:43.616 "num_base_bdevs_operational": 2, 00:17:43.616 "base_bdevs_list": [ 00:17:43.616 { 00:17:43.616 "name": "spare", 00:17:43.616 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:43.616 "is_configured": true, 00:17:43.616 "data_offset": 256, 00:17:43.616 "data_size": 7936 00:17:43.616 }, 00:17:43.616 { 00:17:43.616 "name": "BaseBdev2", 00:17:43.616 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:43.616 "is_configured": true, 00:17:43.616 "data_offset": 256, 00:17:43.616 "data_size": 7936 00:17:43.616 } 00:17:43.616 ] 00:17:43.616 }' 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.616 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.184 [2024-11-28 02:33:17.630876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.184 [2024-11-28 02:33:17.630912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.184 [2024-11-28 02:33:17.631006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.184 [2024-11-28 02:33:17.631074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.184 [2024-11-28 
02:33:17.631092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.184 02:33:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.184 [2024-11-28 02:33:17.682762] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.184 [2024-11-28 02:33:17.682815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.184 [2024-11-28 02:33:17.682839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:44.184 [2024-11-28 02:33:17.682847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.184 [2024-11-28 02:33:17.684749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.184 [2024-11-28 02:33:17.684787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.184 [2024-11-28 02:33:17.684838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:44.184 [2024-11-28 02:33:17.684895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.184 [2024-11-28 02:33:17.685018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.184 spare 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.184 [2024-11-28 02:33:17.784902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:44.184 [2024-11-28 02:33:17.784938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:44.184 [2024-11-28 02:33:17.785037] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:44.184 [2024-11-28 02:33:17.785116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:44.184 [2024-11-28 02:33:17.785125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:44.184 [2024-11-28 02:33:17.785198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.184 02:33:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.184 "name": "raid_bdev1", 00:17:44.184 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:44.184 "strip_size_kb": 0, 00:17:44.184 "state": "online", 00:17:44.184 "raid_level": "raid1", 00:17:44.184 "superblock": true, 00:17:44.184 "num_base_bdevs": 2, 00:17:44.184 "num_base_bdevs_discovered": 2, 00:17:44.184 "num_base_bdevs_operational": 2, 00:17:44.184 "base_bdevs_list": [ 00:17:44.184 { 00:17:44.184 "name": "spare", 00:17:44.184 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:44.184 "is_configured": true, 00:17:44.184 "data_offset": 256, 00:17:44.184 "data_size": 7936 00:17:44.184 }, 00:17:44.184 { 00:17:44.184 "name": "BaseBdev2", 00:17:44.184 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:44.184 "is_configured": true, 00:17:44.184 "data_offset": 256, 00:17:44.184 "data_size": 7936 00:17:44.184 } 00:17:44.184 ] 00:17:44.184 }' 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.184 02:33:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.756 02:33:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.756 "name": "raid_bdev1", 00:17:44.756 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:44.756 "strip_size_kb": 0, 00:17:44.756 "state": "online", 00:17:44.756 "raid_level": "raid1", 00:17:44.756 "superblock": true, 00:17:44.756 "num_base_bdevs": 2, 00:17:44.756 "num_base_bdevs_discovered": 2, 00:17:44.756 "num_base_bdevs_operational": 2, 00:17:44.756 "base_bdevs_list": [ 00:17:44.756 { 00:17:44.756 "name": "spare", 00:17:44.756 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:44.756 "is_configured": true, 00:17:44.756 "data_offset": 256, 00:17:44.756 "data_size": 7936 00:17:44.756 }, 00:17:44.756 { 00:17:44.756 "name": "BaseBdev2", 00:17:44.756 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:44.756 "is_configured": true, 00:17:44.756 "data_offset": 256, 00:17:44.756 "data_size": 7936 00:17:44.756 } 00:17:44.756 ] 00:17:44.756 }' 00:17:44.756 02:33:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.756 [2024-11-28 02:33:18.385634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.756 02:33:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.756 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.017 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.017 "name": "raid_bdev1", 00:17:45.017 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:45.017 "strip_size_kb": 0, 00:17:45.017 "state": "online", 00:17:45.017 
"raid_level": "raid1", 00:17:45.017 "superblock": true, 00:17:45.017 "num_base_bdevs": 2, 00:17:45.017 "num_base_bdevs_discovered": 1, 00:17:45.017 "num_base_bdevs_operational": 1, 00:17:45.017 "base_bdevs_list": [ 00:17:45.017 { 00:17:45.017 "name": null, 00:17:45.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.017 "is_configured": false, 00:17:45.017 "data_offset": 0, 00:17:45.017 "data_size": 7936 00:17:45.017 }, 00:17:45.017 { 00:17:45.017 "name": "BaseBdev2", 00:17:45.017 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:45.017 "is_configured": true, 00:17:45.017 "data_offset": 256, 00:17:45.017 "data_size": 7936 00:17:45.017 } 00:17:45.017 ] 00:17:45.017 }' 00:17:45.017 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.017 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.278 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:45.278 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.278 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.278 [2024-11-28 02:33:18.828895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.278 [2024-11-28 02:33:18.829095] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.278 [2024-11-28 02:33:18.829118] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:45.278 [2024-11-28 02:33:18.829155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.278 [2024-11-28 02:33:18.844390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:45.278 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.278 02:33:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:45.278 [2024-11-28 02:33:18.846182] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.219 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.219 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.219 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.219 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.219 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.219 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.219 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.219 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.219 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.219 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.479 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:46.479 "name": "raid_bdev1", 00:17:46.479 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:46.479 "strip_size_kb": 0, 00:17:46.479 "state": "online", 00:17:46.479 "raid_level": "raid1", 00:17:46.479 "superblock": true, 00:17:46.479 "num_base_bdevs": 2, 00:17:46.479 "num_base_bdevs_discovered": 2, 00:17:46.479 "num_base_bdevs_operational": 2, 00:17:46.479 "process": { 00:17:46.479 "type": "rebuild", 00:17:46.479 "target": "spare", 00:17:46.479 "progress": { 00:17:46.479 "blocks": 2560, 00:17:46.479 "percent": 32 00:17:46.479 } 00:17:46.479 }, 00:17:46.479 "base_bdevs_list": [ 00:17:46.479 { 00:17:46.479 "name": "spare", 00:17:46.479 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:46.479 "is_configured": true, 00:17:46.479 "data_offset": 256, 00:17:46.479 "data_size": 7936 00:17:46.479 }, 00:17:46.479 { 00:17:46.479 "name": "BaseBdev2", 00:17:46.479 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:46.479 "is_configured": true, 00:17:46.479 "data_offset": 256, 00:17:46.479 "data_size": 7936 00:17:46.479 } 00:17:46.479 ] 00:17:46.479 }' 00:17:46.479 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.479 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.479 02:33:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.479 [2024-11-28 02:33:20.014293] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.479 [2024-11-28 02:33:20.050679] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:46.479 [2024-11-28 02:33:20.050739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.479 [2024-11-28 02:33:20.050769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.479 [2024-11-28 02:33:20.050778] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.479 02:33:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.479 "name": "raid_bdev1", 00:17:46.479 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:46.479 "strip_size_kb": 0, 00:17:46.479 "state": "online", 00:17:46.479 "raid_level": "raid1", 00:17:46.479 "superblock": true, 00:17:46.479 "num_base_bdevs": 2, 00:17:46.479 "num_base_bdevs_discovered": 1, 00:17:46.479 "num_base_bdevs_operational": 1, 00:17:46.479 "base_bdevs_list": [ 00:17:46.479 { 00:17:46.479 "name": null, 00:17:46.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.479 "is_configured": false, 00:17:46.479 "data_offset": 0, 00:17:46.479 "data_size": 7936 00:17:46.479 }, 00:17:46.479 { 00:17:46.479 "name": "BaseBdev2", 00:17:46.479 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:46.479 "is_configured": true, 00:17:46.479 "data_offset": 256, 00:17:46.479 "data_size": 7936 00:17:46.479 } 00:17:46.479 ] 00:17:46.479 }' 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.479 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.049 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.049 02:33:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.049 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.049 [2024-11-28 02:33:20.579020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.049 [2024-11-28 02:33:20.579099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.049 [2024-11-28 02:33:20.579125] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:47.049 [2024-11-28 02:33:20.579136] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.049 [2024-11-28 02:33:20.579305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.049 [2024-11-28 02:33:20.579325] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.049 [2024-11-28 02:33:20.579370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:47.049 [2024-11-28 02:33:20.579383] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.049 [2024-11-28 02:33:20.579391] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:47.049 [2024-11-28 02:33:20.579411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.049 [2024-11-28 02:33:20.594528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:47.049 spare 00:17:47.049 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.049 02:33:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:47.049 [2024-11-28 02:33:20.596357] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:47.990 "name": "raid_bdev1", 00:17:47.990 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:47.990 "strip_size_kb": 0, 00:17:47.990 "state": "online", 00:17:47.990 "raid_level": "raid1", 00:17:47.990 "superblock": true, 00:17:47.990 "num_base_bdevs": 2, 00:17:47.990 "num_base_bdevs_discovered": 2, 00:17:47.990 "num_base_bdevs_operational": 2, 00:17:47.990 "process": { 00:17:47.990 "type": "rebuild", 00:17:47.990 "target": "spare", 00:17:47.990 "progress": { 00:17:47.990 "blocks": 2560, 00:17:47.990 "percent": 32 00:17:47.990 } 00:17:47.990 }, 00:17:47.990 "base_bdevs_list": [ 00:17:47.990 { 00:17:47.990 "name": "spare", 00:17:47.990 "uuid": "a41f4e27-7371-5ee8-9eb2-e668444a9be4", 00:17:47.990 "is_configured": true, 00:17:47.990 "data_offset": 256, 00:17:47.990 "data_size": 7936 00:17:47.990 }, 00:17:47.990 { 00:17:47.990 "name": "BaseBdev2", 00:17:47.990 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:47.990 "is_configured": true, 00:17:47.990 "data_offset": 256, 00:17:47.990 "data_size": 7936 00:17:47.990 } 00:17:47.990 ] 00:17:47.990 }' 00:17:47.990 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.251 [2024-11-28 
02:33:21.727928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.251 [2024-11-28 02:33:21.800757] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:48.251 [2024-11-28 02:33:21.800823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.251 [2024-11-28 02:33:21.800839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.251 [2024-11-28 02:33:21.800845] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.251 02:33:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.251 "name": "raid_bdev1", 00:17:48.251 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:48.251 "strip_size_kb": 0, 00:17:48.251 "state": "online", 00:17:48.251 "raid_level": "raid1", 00:17:48.251 "superblock": true, 00:17:48.251 "num_base_bdevs": 2, 00:17:48.251 "num_base_bdevs_discovered": 1, 00:17:48.251 "num_base_bdevs_operational": 1, 00:17:48.251 "base_bdevs_list": [ 00:17:48.251 { 00:17:48.251 "name": null, 00:17:48.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.251 "is_configured": false, 00:17:48.251 "data_offset": 0, 00:17:48.251 "data_size": 7936 00:17:48.251 }, 00:17:48.251 { 00:17:48.251 "name": "BaseBdev2", 00:17:48.251 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:48.251 "is_configured": true, 00:17:48.251 "data_offset": 256, 00:17:48.251 "data_size": 7936 00:17:48.251 } 00:17:48.251 ] 00:17:48.251 }' 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.251 02:33:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.821 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.821 02:33:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.821 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.821 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.821 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.821 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.821 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.821 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.821 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.821 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.821 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.821 "name": "raid_bdev1", 00:17:48.821 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:48.821 "strip_size_kb": 0, 00:17:48.821 "state": "online", 00:17:48.821 "raid_level": "raid1", 00:17:48.821 "superblock": true, 00:17:48.821 "num_base_bdevs": 2, 00:17:48.821 "num_base_bdevs_discovered": 1, 00:17:48.821 "num_base_bdevs_operational": 1, 00:17:48.821 "base_bdevs_list": [ 00:17:48.821 { 00:17:48.821 "name": null, 00:17:48.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.821 "is_configured": false, 00:17:48.821 "data_offset": 0, 00:17:48.821 "data_size": 7936 00:17:48.821 }, 00:17:48.821 { 00:17:48.821 "name": "BaseBdev2", 00:17:48.821 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:48.822 "is_configured": true, 00:17:48.822 "data_offset": 256, 
00:17:48.822 "data_size": 7936 00:17:48.822 } 00:17:48.822 ] 00:17:48.822 }' 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.822 [2024-11-28 02:33:22.433228] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:48.822 [2024-11-28 02:33:22.433282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.822 [2024-11-28 02:33:22.433321] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:48.822 [2024-11-28 02:33:22.433330] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.822 [2024-11-28 02:33:22.433507] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.822 [2024-11-28 02:33:22.433539] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:48.822 [2024-11-28 02:33:22.433588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:48.822 [2024-11-28 02:33:22.433605] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.822 [2024-11-28 02:33:22.433615] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:48.822 [2024-11-28 02:33:22.433624] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:48.822 BaseBdev1 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.822 02:33:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.203 02:33:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.203 "name": "raid_bdev1", 00:17:50.203 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:50.203 "strip_size_kb": 0, 00:17:50.203 "state": "online", 00:17:50.203 "raid_level": "raid1", 00:17:50.203 "superblock": true, 00:17:50.203 "num_base_bdevs": 2, 00:17:50.203 "num_base_bdevs_discovered": 1, 00:17:50.203 "num_base_bdevs_operational": 1, 00:17:50.203 "base_bdevs_list": [ 00:17:50.203 { 00:17:50.203 "name": null, 00:17:50.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.203 "is_configured": false, 00:17:50.203 "data_offset": 0, 00:17:50.203 "data_size": 7936 00:17:50.203 }, 00:17:50.203 { 00:17:50.203 "name": "BaseBdev2", 00:17:50.203 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:50.203 "is_configured": true, 00:17:50.203 "data_offset": 256, 00:17:50.203 "data_size": 7936 00:17:50.203 } 00:17:50.203 ] 00:17:50.203 }' 00:17:50.203 02:33:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.203 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.463 "name": "raid_bdev1", 00:17:50.463 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:50.463 "strip_size_kb": 0, 00:17:50.463 "state": "online", 00:17:50.463 "raid_level": "raid1", 00:17:50.463 "superblock": true, 00:17:50.463 "num_base_bdevs": 2, 00:17:50.463 "num_base_bdevs_discovered": 1, 00:17:50.463 "num_base_bdevs_operational": 1, 00:17:50.463 "base_bdevs_list": [ 00:17:50.463 { 00:17:50.463 "name": 
null, 00:17:50.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.463 "is_configured": false, 00:17:50.463 "data_offset": 0, 00:17:50.463 "data_size": 7936 00:17:50.463 }, 00:17:50.463 { 00:17:50.463 "name": "BaseBdev2", 00:17:50.463 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:50.463 "is_configured": true, 00:17:50.463 "data_offset": 256, 00:17:50.463 "data_size": 7936 00:17:50.463 } 00:17:50.463 ] 00:17:50.463 }' 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.463 02:33:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.463 [2024-11-28 02:33:24.042508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.463 [2024-11-28 02:33:24.042656] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.463 [2024-11-28 02:33:24.042674] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:50.463 request: 00:17:50.463 { 00:17:50.463 "base_bdev": "BaseBdev1", 00:17:50.463 "raid_bdev": "raid_bdev1", 00:17:50.463 "method": "bdev_raid_add_base_bdev", 00:17:50.463 "req_id": 1 00:17:50.463 } 00:17:50.463 Got JSON-RPC error response 00:17:50.463 response: 00:17:50.463 { 00:17:50.463 "code": -22, 00:17:50.463 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:50.463 } 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.463 02:33:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.403 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.661 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.661 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.661 "name": "raid_bdev1", 00:17:51.661 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:51.661 "strip_size_kb": 0, 
00:17:51.661 "state": "online", 00:17:51.661 "raid_level": "raid1", 00:17:51.661 "superblock": true, 00:17:51.661 "num_base_bdevs": 2, 00:17:51.661 "num_base_bdevs_discovered": 1, 00:17:51.661 "num_base_bdevs_operational": 1, 00:17:51.661 "base_bdevs_list": [ 00:17:51.661 { 00:17:51.661 "name": null, 00:17:51.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.661 "is_configured": false, 00:17:51.661 "data_offset": 0, 00:17:51.661 "data_size": 7936 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "name": "BaseBdev2", 00:17:51.661 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:51.661 "is_configured": true, 00:17:51.661 "data_offset": 256, 00:17:51.661 "data_size": 7936 00:17:51.661 } 00:17:51.661 ] 00:17:51.661 }' 00:17:51.661 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.661 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.920 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.920 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.920 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.920 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.920 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.920 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.920 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.920 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.920 
02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.920 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.920 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.920 "name": "raid_bdev1", 00:17:51.920 "uuid": "57f1380a-83ee-45ab-be47-e685cbbffcae", 00:17:51.920 "strip_size_kb": 0, 00:17:51.921 "state": "online", 00:17:51.921 "raid_level": "raid1", 00:17:51.921 "superblock": true, 00:17:51.921 "num_base_bdevs": 2, 00:17:51.921 "num_base_bdevs_discovered": 1, 00:17:51.921 "num_base_bdevs_operational": 1, 00:17:51.921 "base_bdevs_list": [ 00:17:51.921 { 00:17:51.921 "name": null, 00:17:51.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.921 "is_configured": false, 00:17:51.921 "data_offset": 0, 00:17:51.921 "data_size": 7936 00:17:51.921 }, 00:17:51.921 { 00:17:51.921 "name": "BaseBdev2", 00:17:51.921 "uuid": "2a80e841-a8b3-5262-9233-b150c07b0ad5", 00:17:51.921 "is_configured": true, 00:17:51.921 "data_offset": 256, 00:17:51.921 "data_size": 7936 00:17:51.921 } 00:17:51.921 ] 00:17:51.921 }' 00:17:51.921 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88756 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88756 ']' 00:17:52.181 02:33:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88756 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88756 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.181 killing process with pid 88756 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88756' 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88756 00:17:52.181 Received shutdown signal, test time was about 60.000000 seconds 00:17:52.181 00:17:52.181 Latency(us) 00:17:52.181 [2024-11-28T02:33:25.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.181 [2024-11-28T02:33:25.860Z] =================================================================================================================== 00:17:52.181 [2024-11-28T02:33:25.860Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.181 [2024-11-28 02:33:25.682092] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.181 [2024-11-28 02:33:25.682245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.181 02:33:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88756 00:17:52.181 [2024-11-28 02:33:25.682309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:17:52.181 [2024-11-28 02:33:25.682322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:52.441 [2024-11-28 02:33:25.994493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:53.822 02:33:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:53.822 00:17:53.822 real 0m17.522s 00:17:53.822 user 0m23.051s 00:17:53.822 sys 0m1.599s 00:17:53.822 02:33:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.822 02:33:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.822 ************************************ 00:17:53.822 END TEST raid_rebuild_test_sb_md_interleaved 00:17:53.822 ************************************ 00:17:53.822 02:33:27 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:53.822 02:33:27 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:53.822 02:33:27 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88756 ']' 00:17:53.822 02:33:27 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88756 00:17:53.822 02:33:27 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:53.822 00:17:53.822 real 11m46.832s 00:17:53.822 user 15m53.247s 00:17:53.822 sys 1m49.751s 00:17:53.822 02:33:27 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.822 02:33:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.822 ************************************ 00:17:53.822 END TEST bdev_raid 00:17:53.822 ************************************ 00:17:53.822 02:33:27 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:53.822 02:33:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.822 02:33:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.822 02:33:27 -- common/autotest_common.sh@10 -- # set +x 00:17:53.822 
************************************ 00:17:53.822 START TEST spdkcli_raid 00:17:53.822 ************************************ 00:17:53.822 02:33:27 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:53.822 * Looking for test storage... 00:17:53.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:53.822 02:33:27 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:53.822 02:33:27 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:53.822 02:33:27 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:53.822 02:33:27 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:53.822 02:33:27 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.822 02:33:27 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.822 02:33:27 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.822 02:33:27 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.822 02:33:27 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.822 02:33:27 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.822 02:33:27 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.823 02:33:27 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.823 --rc genhtml_branch_coverage=1 00:17:53.823 --rc genhtml_function_coverage=1 00:17:53.823 --rc genhtml_legend=1 00:17:53.823 --rc geninfo_all_blocks=1 00:17:53.823 --rc geninfo_unexecuted_blocks=1 00:17:53.823 00:17:53.823 ' 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.823 --rc genhtml_branch_coverage=1 00:17:53.823 --rc genhtml_function_coverage=1 00:17:53.823 --rc genhtml_legend=1 00:17:53.823 --rc geninfo_all_blocks=1 00:17:53.823 --rc geninfo_unexecuted_blocks=1 00:17:53.823 00:17:53.823 ' 00:17:53.823 
02:33:27 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.823 --rc genhtml_branch_coverage=1 00:17:53.823 --rc genhtml_function_coverage=1 00:17:53.823 --rc genhtml_legend=1 00:17:53.823 --rc geninfo_all_blocks=1 00:17:53.823 --rc geninfo_unexecuted_blocks=1 00:17:53.823 00:17:53.823 ' 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.823 --rc genhtml_branch_coverage=1 00:17:53.823 --rc genhtml_function_coverage=1 00:17:53.823 --rc genhtml_legend=1 00:17:53.823 --rc geninfo_all_blocks=1 00:17:53.823 --rc geninfo_unexecuted_blocks=1 00:17:53.823 00:17:53.823 ' 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:53.823 02:33:27 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89434 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:53.823 02:33:27 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89434 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89434 ']' 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.823 02:33:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.084 [2024-11-28 02:33:27.581702] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:54.084 [2024-11-28 02:33:27.581816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89434 ] 00:17:54.084 [2024-11-28 02:33:27.757766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:54.343 [2024-11-28 02:33:27.866798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.343 [2024-11-28 02:33:27.866837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.283 02:33:28 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.283 02:33:28 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:17:55.283 02:33:28 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:55.283 02:33:28 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:55.283 02:33:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.283 02:33:28 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:55.283 02:33:28 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:55.283 02:33:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.283 02:33:28 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:55.283 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:55.283 ' 00:17:56.664 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:56.664 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:56.929 02:33:30 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:56.929 02:33:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.929 02:33:30 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.929 02:33:30 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:56.929 02:33:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.929 02:33:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.929 02:33:30 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:56.929 ' 00:17:57.866 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:58.125 02:33:31 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:58.125 02:33:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.125 02:33:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.125 02:33:31 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:58.125 02:33:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.125 02:33:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.125 02:33:31 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:58.125 02:33:31 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:58.695 02:33:32 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:58.695 02:33:32 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:58.696 02:33:32 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:58.696 02:33:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.696 02:33:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.696 02:33:32 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:58.696 02:33:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.696 02:33:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.696 02:33:32 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:58.696 ' 00:17:59.634 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:59.894 02:33:33 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:59.894 02:33:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.894 02:33:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.894 02:33:33 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:59.894 02:33:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.894 02:33:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.894 02:33:33 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:59.894 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:59.894 ' 00:18:01.275 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:01.275 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:01.275 02:33:34 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:01.275 02:33:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.275 02:33:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.535 02:33:34 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89434 00:18:01.535 02:33:34 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89434 ']' 00:18:01.535 02:33:34 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89434 00:18:01.535 02:33:34 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:01.535 02:33:34 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.535 02:33:34 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89434 00:18:01.535 killing process with pid 89434 00:18:01.535 02:33:35 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.535 02:33:35 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.535 02:33:35 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89434' 00:18:01.535 02:33:35 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89434 00:18:01.535 02:33:35 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89434 00:18:04.102 02:33:37 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:04.102 02:33:37 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89434 ']' 00:18:04.102 Process with pid 89434 is not found 00:18:04.102 02:33:37 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89434 00:18:04.102 02:33:37 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89434 ']' 00:18:04.102 02:33:37 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89434 00:18:04.102 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89434) - No such process 00:18:04.102 02:33:37 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89434 is not found' 00:18:04.102 02:33:37 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:04.102 02:33:37 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:04.102 02:33:37 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:04.102 02:33:37 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:04.102 ************************************ 00:18:04.102 END TEST spdkcli_raid 
00:18:04.102 ************************************ 00:18:04.102 00:18:04.102 real 0m10.294s 00:18:04.102 user 0m21.151s 00:18:04.102 sys 0m1.174s 00:18:04.102 02:33:37 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.102 02:33:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.102 02:33:37 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:04.102 02:33:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:04.102 02:33:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.102 02:33:37 -- common/autotest_common.sh@10 -- # set +x 00:18:04.102 ************************************ 00:18:04.102 START TEST blockdev_raid5f 00:18:04.102 ************************************ 00:18:04.102 02:33:37 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:04.102 * Looking for test storage... 00:18:04.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:04.102 02:33:37 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:04.102 02:33:37 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:04.102 02:33:37 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:04.362 02:33:37 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:04.362 02:33:37 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.363 02:33:37 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:04.363 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.363 --rc genhtml_branch_coverage=1 00:18:04.363 --rc genhtml_function_coverage=1 00:18:04.363 --rc genhtml_legend=1 00:18:04.363 --rc geninfo_all_blocks=1 00:18:04.363 --rc geninfo_unexecuted_blocks=1 00:18:04.363 00:18:04.363 ' 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:04.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.363 --rc genhtml_branch_coverage=1 00:18:04.363 --rc genhtml_function_coverage=1 00:18:04.363 --rc genhtml_legend=1 00:18:04.363 --rc geninfo_all_blocks=1 00:18:04.363 --rc geninfo_unexecuted_blocks=1 00:18:04.363 00:18:04.363 ' 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:04.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.363 --rc genhtml_branch_coverage=1 00:18:04.363 --rc genhtml_function_coverage=1 00:18:04.363 --rc genhtml_legend=1 00:18:04.363 --rc geninfo_all_blocks=1 00:18:04.363 --rc geninfo_unexecuted_blocks=1 00:18:04.363 00:18:04.363 ' 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:04.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.363 --rc genhtml_branch_coverage=1 00:18:04.363 --rc genhtml_function_coverage=1 00:18:04.363 --rc genhtml_legend=1 00:18:04.363 --rc geninfo_all_blocks=1 00:18:04.363 --rc geninfo_unexecuted_blocks=1 00:18:04.363 00:18:04.363 ' 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89715 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:04.363 02:33:37 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89715 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89715 ']' 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.363 02:33:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:04.363 [2024-11-28 02:33:37.961862] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:04.363 [2024-11-28 02:33:37.962058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89715 ] 00:18:04.624 [2024-11-28 02:33:38.136404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.624 [2024-11-28 02:33:38.269718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:06.009 02:33:39 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:06.009 Malloc0 00:18:06.009 Malloc1 00:18:06.009 Malloc2 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "80df0a9e-a7f9-4711-9de3-56fbf6899ffc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "80df0a9e-a7f9-4711-9de3-56fbf6899ffc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "80df0a9e-a7f9-4711-9de3-56fbf6899ffc",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "55ef723a-2dda-4bf3-89f8-dac3c93a0dd0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "b87389fa-5bb2-43c9-9bfd-2c4888ed0c49",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b9d55d4d-19b0-4494-a0af-6233bec76987",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:06.009 02:33:39 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89715 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89715 ']' 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89715 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89715 00:18:06.009 killing process with pid 89715 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89715' 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89715 00:18:06.009 02:33:39 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89715 00:18:09.307 02:33:42 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:09.307 02:33:42 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:09.307 02:33:42 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:09.307 02:33:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.307 02:33:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 ************************************ 00:18:09.307 START TEST bdev_hello_world 00:18:09.307 ************************************ 00:18:09.307 02:33:42 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:09.307 [2024-11-28 02:33:42.497682] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:09.307 [2024-11-28 02:33:42.497861] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89782 ] 00:18:09.307 [2024-11-28 02:33:42.672986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.307 [2024-11-28 02:33:42.805302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.877 [2024-11-28 02:33:43.401850] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:09.877 [2024-11-28 02:33:43.401995] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:09.877 [2024-11-28 02:33:43.402029] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:09.877 [2024-11-28 02:33:43.402534] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:09.877 [2024-11-28 02:33:43.402717] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:09.877 [2024-11-28 02:33:43.402762] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:09.877 [2024-11-28 02:33:43.402823] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:09.877 00:18:09.877 [2024-11-28 02:33:43.402875] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:11.261 00:18:11.261 ************************************ 00:18:11.261 END TEST bdev_hello_world 00:18:11.261 ************************************ 00:18:11.261 real 0m2.454s 00:18:11.261 user 0m1.984s 00:18:11.261 sys 0m0.347s 00:18:11.261 02:33:44 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.261 02:33:44 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:11.261 02:33:44 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:11.261 02:33:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:11.261 02:33:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.261 02:33:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:11.261 ************************************ 00:18:11.261 START TEST bdev_bounds 00:18:11.261 ************************************ 00:18:11.261 02:33:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:11.635 02:33:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89834 00:18:11.635 02:33:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:11.635 02:33:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:11.635 Process bdevio pid: 89834 00:18:11.635 02:33:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89834' 00:18:11.635 02:33:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89834 00:18:11.635 02:33:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89834 ']' 00:18:11.635 02:33:44 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.635 02:33:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.635 02:33:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.635 02:33:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.635 02:33:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:11.635 [2024-11-28 02:33:45.024476] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:11.635 [2024-11-28 02:33:45.024637] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89834 ] 00:18:11.635 [2024-11-28 02:33:45.199376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:11.893 [2024-11-28 02:33:45.349825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.893 [2024-11-28 02:33:45.350056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.893 [2024-11-28 02:33:45.350259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.463 02:33:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.463 02:33:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:12.463 02:33:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:12.463 I/O targets: 00:18:12.463 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:12.463 00:18:12.463 
00:18:12.463 CUnit - A unit testing framework for C - Version 2.1-3 00:18:12.463 http://cunit.sourceforge.net/ 00:18:12.463 00:18:12.463 00:18:12.463 Suite: bdevio tests on: raid5f 00:18:12.463 Test: blockdev write read block ...passed 00:18:12.463 Test: blockdev write zeroes read block ...passed 00:18:12.463 Test: blockdev write zeroes read no split ...passed 00:18:12.723 Test: blockdev write zeroes read split ...passed 00:18:12.723 Test: blockdev write zeroes read split partial ...passed 00:18:12.723 Test: blockdev reset ...passed 00:18:12.723 Test: blockdev write read 8 blocks ...passed 00:18:12.723 Test: blockdev write read size > 128k ...passed 00:18:12.723 Test: blockdev write read invalid size ...passed 00:18:12.723 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:12.723 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:12.723 Test: blockdev write read max offset ...passed 00:18:12.723 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:12.723 Test: blockdev writev readv 8 blocks ...passed 00:18:12.723 Test: blockdev writev readv 30 x 1block ...passed 00:18:12.723 Test: blockdev writev readv block ...passed 00:18:12.723 Test: blockdev writev readv size > 128k ...passed 00:18:12.723 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:12.723 Test: blockdev comparev and writev ...passed 00:18:12.723 Test: blockdev nvme passthru rw ...passed 00:18:12.723 Test: blockdev nvme passthru vendor specific ...passed 00:18:12.723 Test: blockdev nvme admin passthru ...passed 00:18:12.723 Test: blockdev copy ...passed 00:18:12.723 00:18:12.723 Run Summary: Type Total Ran Passed Failed Inactive 00:18:12.723 suites 1 1 n/a 0 0 00:18:12.723 tests 23 23 23 0 0 00:18:12.723 asserts 130 130 130 0 n/a 00:18:12.723 00:18:12.723 Elapsed time = 0.623 seconds 00:18:12.723 0 00:18:12.723 02:33:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89834 00:18:12.723 
02:33:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89834 ']' 00:18:12.723 02:33:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89834 00:18:12.723 02:33:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:12.723 02:33:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.723 02:33:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89834 00:18:12.723 02:33:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.723 02:33:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.723 02:33:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89834' 00:18:12.723 killing process with pid 89834 00:18:12.723 02:33:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89834 00:18:12.723 02:33:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89834 00:18:14.633 02:33:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:14.633 00:18:14.633 real 0m2.899s 00:18:14.633 user 0m7.092s 00:18:14.633 sys 0m0.472s 00:18:14.633 02:33:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.633 02:33:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:14.633 ************************************ 00:18:14.633 END TEST bdev_bounds 00:18:14.633 ************************************ 00:18:14.633 02:33:47 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:14.633 02:33:47 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:14.633 02:33:47 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.633 
02:33:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:14.633 ************************************ 00:18:14.633 START TEST bdev_nbd 00:18:14.633 ************************************ 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89895 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89895 /var/tmp/spdk-nbd.sock 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89895 ']' 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:14.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.633 02:33:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:14.633 [2024-11-28 02:33:48.028090] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:18:14.633 [2024-11-28 02:33:48.028263] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.633 [2024-11-28 02:33:48.211071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.893 [2024-11-28 02:33:48.344314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:15.463 02:33:48 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.722 1+0 records in 00:18:15.722 1+0 records out 00:18:15.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495245 s, 8.3 MB/s 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:15.722 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:15.982 { 00:18:15.982 "nbd_device": "/dev/nbd0", 00:18:15.982 "bdev_name": "raid5f" 00:18:15.982 } 00:18:15.982 ]' 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:15.982 { 00:18:15.982 "nbd_device": "/dev/nbd0", 00:18:15.982 "bdev_name": "raid5f" 00:18:15.982 } 00:18:15.982 ]' 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.982 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:16.241 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:16.241 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:16.241 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:16.241 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.241 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.241 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:16.241 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:16.241 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.241 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:16.241 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.242 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.501 02:33:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:16.761 /dev/nbd0 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:16.761 02:33:50 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.761 1+0 records in 00:18:16.761 1+0 records out 00:18:16.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533505 s, 7.7 MB/s 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.761 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:17.020 { 00:18:17.020 "nbd_device": "/dev/nbd0", 00:18:17.020 "bdev_name": "raid5f" 00:18:17.020 } 00:18:17.020 ]' 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:17.020 { 00:18:17.020 "nbd_device": "/dev/nbd0", 00:18:17.020 "bdev_name": "raid5f" 00:18:17.020 } 00:18:17.020 ]' 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:17.020 256+0 records in 00:18:17.020 256+0 records out 00:18:17.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142085 s, 73.8 MB/s 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:17.020 256+0 records in 00:18:17.020 256+0 records out 00:18:17.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313733 s, 33.4 MB/s 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:17.020 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:17.021 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:17.021 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:17.021 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:17.021 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:17.021 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:17.021 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:17.021 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:17.021 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:17.021 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.021 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:17.280 02:33:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:17.539 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:17.798 malloc_lvol_verify 00:18:17.798 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:18.058 3b439b5d-0f85-4776-a43a-214d48609bd3 00:18:18.058 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:18.058 d2efcb61-5b20-4822-96c3-3689f7ee8725 00:18:18.058 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:18.318 /dev/nbd0 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:18.318 mke2fs 1.47.0 (5-Feb-2023) 00:18:18.318 Discarding device blocks: 0/4096 done 00:18:18.318 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:18.318 00:18:18.318 Allocating group tables: 0/1 done 00:18:18.318 Writing inode tables: 0/1 done 00:18:18.318 Creating journal (1024 blocks): done 00:18:18.318 Writing superblocks and filesystem accounting information: 0/1 done 00:18:18.318 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.318 02:33:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89895 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89895 ']' 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89895 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89895 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.578 killing process with pid 89895 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89895' 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89895 00:18:18.578 02:33:52 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89895 00:18:20.488 02:33:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:20.488 00:18:20.488 real 0m5.774s 00:18:20.488 user 0m7.615s 00:18:20.488 sys 0m1.383s 00:18:20.488 02:33:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.488 02:33:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:20.488 ************************************ 00:18:20.488 END TEST bdev_nbd 00:18:20.489 ************************************ 00:18:20.489 02:33:53 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:20.489 02:33:53 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:18:20.489 02:33:53 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:18:20.489 02:33:53 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:18:20.489 02:33:53 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:20.489 02:33:53 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.489 02:33:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:20.489 ************************************ 00:18:20.489 START TEST bdev_fio 00:18:20.489 ************************************ 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:20.489 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:20.489 ************************************ 00:18:20.489 START TEST bdev_fio_rw_verify 00:18:20.489 ************************************ 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:20.489 02:33:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:20.749 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:20.749 fio-3.35 00:18:20.749 Starting 1 thread 00:18:32.980 00:18:32.980 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90097: Thu Nov 28 02:34:05 2024 00:18:32.980 read: IOPS=12.4k, BW=48.5MiB/s (50.8MB/s)(485MiB/10000msec) 00:18:32.980 slat (nsec): min=17776, max=60127, avg=19454.49, stdev=1818.24 00:18:32.980 clat (usec): min=8, max=340, avg=131.03, stdev=45.92 00:18:32.980 lat (usec): min=28, max=370, avg=150.49, stdev=46.15 00:18:32.980 clat percentiles (usec): 00:18:32.980 | 50.000th=[ 135], 99.000th=[ 217], 99.900th=[ 241], 99.990th=[ 293], 00:18:32.980 | 99.999th=[ 326] 00:18:32.980 write: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(501MiB/9875msec); 0 zone resets 00:18:32.980 slat (usec): min=7, max=319, avg=15.95, stdev= 3.56 00:18:32.980 clat (usec): min=58, max=1207, avg=295.87, stdev=38.61 00:18:32.980 lat (usec): min=73, max=1456, avg=311.82, stdev=39.51 00:18:32.980 clat percentiles (usec): 00:18:32.980 | 50.000th=[ 302], 99.000th=[ 371], 99.900th=[ 570], 99.990th=[ 955], 00:18:32.980 | 99.999th=[ 1123] 00:18:32.980 bw ( KiB/s): min=49168, max=54368, per=98.81%, avg=51346.95, stdev=1479.85, samples=19 00:18:32.980 iops : min=12292, max=13592, avg=12836.74, stdev=369.96, samples=19 00:18:32.980 lat (usec) : 10=0.01%, 20=0.01%, 
50=0.01%, 100=16.10%, 250=39.40% 00:18:32.980 lat (usec) : 500=44.41%, 750=0.06%, 1000=0.02% 00:18:32.980 lat (msec) : 2=0.01% 00:18:32.980 cpu : usr=98.93%, sys=0.39%, ctx=22, majf=0, minf=10145 00:18:32.980 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.980 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.980 issued rwts: total=124108,128290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.980 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:32.980 00:18:32.980 Run status group 0 (all jobs): 00:18:32.980 READ: bw=48.5MiB/s (50.8MB/s), 48.5MiB/s-48.5MiB/s (50.8MB/s-50.8MB/s), io=485MiB (508MB), run=10000-10000msec 00:18:32.980 WRITE: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=501MiB (525MB), run=9875-9875msec 00:18:33.551 ----------------------------------------------------- 00:18:33.551 Suppressions used: 00:18:33.551 count bytes template 00:18:33.551 1 7 /usr/src/fio/parse.c 00:18:33.551 337 32352 /usr/src/fio/iolog.c 00:18:33.551 1 8 libtcmalloc_minimal.so 00:18:33.551 1 904 libcrypto.so 00:18:33.551 ----------------------------------------------------- 00:18:33.551 00:18:33.551 00:18:33.551 real 0m13.053s 00:18:33.551 user 0m13.179s 00:18:33.551 sys 0m0.763s 00:18:33.551 02:34:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.551 02:34:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:33.551 ************************************ 00:18:33.551 END TEST bdev_fio_rw_verify 00:18:33.551 ************************************ 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:33.551 02:34:07 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "80df0a9e-a7f9-4711-9de3-56fbf6899ffc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "80df0a9e-a7f9-4711-9de3-56fbf6899ffc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "80df0a9e-a7f9-4711-9de3-56fbf6899ffc",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "55ef723a-2dda-4bf3-89f8-dac3c93a0dd0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "b87389fa-5bb2-43c9-9bfd-2c4888ed0c49",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b9d55d4d-19b0-4494-a0af-6233bec76987",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:33.551 /home/vagrant/spdk_repo/spdk 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:18:33.551 00:18:33.551 real 0m13.362s 00:18:33.551 user 0m13.311s 00:18:33.551 sys 0m0.911s 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.551 02:34:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:33.551 ************************************ 00:18:33.551 END TEST bdev_fio 00:18:33.551 ************************************ 00:18:33.551 02:34:07 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:33.551 02:34:07 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:33.551 02:34:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:33.551 02:34:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.551 02:34:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:33.551 ************************************ 00:18:33.551 START TEST bdev_verify 00:18:33.551 ************************************ 00:18:33.551 02:34:07 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:33.811 [2024-11-28 02:34:07.303476] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:18:33.811 [2024-11-28 02:34:07.304379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90261 ] 00:18:34.072 [2024-11-28 02:34:07.511061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:34.072 [2024-11-28 02:34:07.647028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.072 [2024-11-28 02:34:07.647054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.641 Running I/O for 5 seconds... 00:18:36.960 10392.00 IOPS, 40.59 MiB/s [2024-11-28T02:34:11.591Z] 10559.50 IOPS, 41.25 MiB/s [2024-11-28T02:34:12.530Z] 10580.33 IOPS, 41.33 MiB/s [2024-11-28T02:34:13.470Z] 10590.75 IOPS, 41.37 MiB/s [2024-11-28T02:34:13.470Z] 10574.40 IOPS, 41.31 MiB/s 00:18:39.791 Latency(us) 00:18:39.791 [2024-11-28T02:34:13.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.791 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:39.791 Verification LBA range: start 0x0 length 0x2000 00:18:39.791 raid5f : 5.02 6348.53 24.80 0.00 0.00 30347.10 330.90 25413.09 00:18:39.791 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:39.791 Verification LBA range: start 0x2000 length 0x2000 00:18:39.791 raid5f : 5.03 4227.04 16.51 0.00 0.00 45604.63 119.84 32510.43 00:18:39.791 [2024-11-28T02:34:13.470Z] =================================================================================================================== 00:18:39.791 [2024-11-28T02:34:13.470Z] Total : 10575.57 41.31 0.00 0.00 36449.48 119.84 32510.43 00:18:41.173 00:18:41.173 real 0m7.531s 00:18:41.173 user 0m13.770s 00:18:41.173 sys 0m0.397s 00:18:41.173 02:34:14 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.173 02:34:14 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:41.173 ************************************ 00:18:41.173 END TEST bdev_verify 00:18:41.173 ************************************ 00:18:41.173 02:34:14 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:41.173 02:34:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:41.173 02:34:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.173 02:34:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:41.173 ************************************ 00:18:41.173 START TEST bdev_verify_big_io 00:18:41.173 ************************************ 00:18:41.173 02:34:14 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:41.433 [2024-11-28 02:34:14.897278] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:41.433 [2024-11-28 02:34:14.897398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90364 ] 00:18:41.433 [2024-11-28 02:34:15.070474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:41.693 [2024-11-28 02:34:15.206723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.693 [2024-11-28 02:34:15.206752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.261 Running I/O for 5 seconds... 
00:18:44.213 633.00 IOPS, 39.56 MiB/s [2024-11-28T02:34:19.275Z] 760.00 IOPS, 47.50 MiB/s [2024-11-28T02:34:20.215Z] 761.33 IOPS, 47.58 MiB/s [2024-11-28T02:34:21.156Z] 776.50 IOPS, 48.53 MiB/s [2024-11-28T02:34:21.156Z] 761.60 IOPS, 47.60 MiB/s 00:18:47.477 Latency(us) 00:18:47.477 [2024-11-28T02:34:21.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.477 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:47.477 Verification LBA range: start 0x0 length 0x200 00:18:47.477 raid5f : 5.19 440.30 27.52 0.00 0.00 7292373.15 298.70 320525.41 00:18:47.477 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:47.477 Verification LBA range: start 0x200 length 0x200 00:18:47.477 raid5f : 5.21 341.32 21.33 0.00 0.00 9339981.83 200.33 406609.38 00:18:47.477 [2024-11-28T02:34:21.156Z] =================================================================================================================== 00:18:47.477 [2024-11-28T02:34:21.156Z] Total : 781.62 48.85 0.00 0.00 8188201.95 200.33 406609.38 00:18:48.859 00:18:48.860 real 0m7.679s 00:18:48.860 user 0m14.151s 00:18:48.860 sys 0m0.358s 00:18:48.860 02:34:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.860 02:34:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.860 ************************************ 00:18:48.860 END TEST bdev_verify_big_io 00:18:48.860 ************************************ 00:18:49.120 02:34:22 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:49.120 02:34:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:49.120 02:34:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.120 02:34:22 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:49.120 ************************************ 00:18:49.120 START TEST bdev_write_zeroes 00:18:49.120 ************************************ 00:18:49.120 02:34:22 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:49.120 [2024-11-28 02:34:22.657968] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:49.120 [2024-11-28 02:34:22.658103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90458 ] 00:18:49.380 [2024-11-28 02:34:22.831884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.380 [2024-11-28 02:34:22.964971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.950 Running I/O for 1 seconds... 
00:18:51.331 29559.00 IOPS, 115.46 MiB/s 00:18:51.331 Latency(us) 00:18:51.331 [2024-11-28T02:34:25.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.331 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:51.331 raid5f : 1.01 29532.16 115.36 0.00 0.00 4320.87 1352.22 5866.76 00:18:51.331 [2024-11-28T02:34:25.010Z] =================================================================================================================== 00:18:51.331 [2024-11-28T02:34:25.011Z] Total : 29532.16 115.36 0.00 0.00 4320.87 1352.22 5866.76 00:18:52.724 00:18:52.724 real 0m3.479s 00:18:52.724 user 0m2.977s 00:18:52.724 sys 0m0.374s 00:18:52.724 02:34:26 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.724 02:34:26 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:52.724 ************************************ 00:18:52.724 END TEST bdev_write_zeroes 00:18:52.724 ************************************ 00:18:52.724 02:34:26 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:52.724 02:34:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:52.724 02:34:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.724 02:34:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:52.724 ************************************ 00:18:52.724 START TEST bdev_json_nonenclosed 00:18:52.724 ************************************ 00:18:52.724 02:34:26 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:52.724 [2024-11-28 
02:34:26.215648] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:52.724 [2024-11-28 02:34:26.215769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90517 ] 00:18:52.724 [2024-11-28 02:34:26.390014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.983 [2024-11-28 02:34:26.520586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.983 [2024-11-28 02:34:26.520689] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:52.983 [2024-11-28 02:34:26.520718] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:52.983 [2024-11-28 02:34:26.520729] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:53.243 00:18:53.243 real 0m0.653s 00:18:53.243 user 0m0.405s 00:18:53.243 sys 0m0.143s 00:18:53.243 02:34:26 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.243 02:34:26 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:53.243 ************************************ 00:18:53.243 END TEST bdev_json_nonenclosed 00:18:53.243 ************************************ 00:18:53.243 02:34:26 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:53.243 02:34:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:53.243 02:34:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.243 02:34:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.243 
************************************ 00:18:53.243 START TEST bdev_json_nonarray 00:18:53.243 ************************************ 00:18:53.243 02:34:26 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:53.504 [2024-11-28 02:34:26.932070] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:53.504 [2024-11-28 02:34:26.932218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90542 ] 00:18:53.504 [2024-11-28 02:34:27.107367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.764 [2024-11-28 02:34:27.242106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.764 [2024-11-28 02:34:27.242218] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:53.764 [2024-11-28 02:34:27.242237] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:53.764 [2024-11-28 02:34:27.242257] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:54.024 00:18:54.024 real 0m0.658s 00:18:54.024 user 0m0.418s 00:18:54.024 sys 0m0.136s 00:18:54.024 02:34:27 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.024 02:34:27 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:54.024 ************************************ 00:18:54.024 END TEST bdev_json_nonarray 00:18:54.024 ************************************ 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:18:54.024 02:34:27 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:18:54.024 00:18:54.024 real 0m49.968s 00:18:54.024 user 1m6.380s 00:18:54.024 sys 0m5.843s 00:18:54.024 02:34:27 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.024 02:34:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.024 
************************************ 00:18:54.024 END TEST blockdev_raid5f 00:18:54.024 ************************************ 00:18:54.024 02:34:27 -- spdk/autotest.sh@194 -- # uname -s 00:18:54.024 02:34:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:54.024 02:34:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:54.024 02:34:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:54.024 02:34:27 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:54.024 02:34:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.024 02:34:27 -- common/autotest_common.sh@10 -- # set +x 00:18:54.024 02:34:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:18:54.024 02:34:27 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:18:54.024 02:34:27 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:18:54.024 02:34:27 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:18:54.024 02:34:27 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:18:54.024 02:34:27 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:18:54.024 02:34:27 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:18:54.024 02:34:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.025 02:34:27 -- common/autotest_common.sh@10 -- # set +x 00:18:54.284 02:34:27 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:18:54.284 02:34:27 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:18:54.284 02:34:27 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:18:54.284 02:34:27 -- common/autotest_common.sh@10 -- # set +x 00:18:56.827 INFO: APP EXITING 00:18:56.827 INFO: killing all VMs 00:18:56.827 INFO: killing vhost app 00:18:56.827 INFO: EXIT DONE 00:18:56.827 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:57.088 Waiting for block devices as requested 00:18:57.088 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:57.088 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:58.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:58.026 Cleaning 00:18:58.026 Removing: /var/run/dpdk/spdk0/config 00:18:58.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:58.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:58.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:58.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:58.027 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:58.027 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:58.027 Removing: /dev/shm/spdk_tgt_trace.pid56827 00:18:58.027 Removing: /var/run/dpdk/spdk0 00:18:58.027 Removing: /var/run/dpdk/spdk_pid56592 00:18:58.027 Removing: /var/run/dpdk/spdk_pid56827 00:18:58.027 Removing: /var/run/dpdk/spdk_pid57056 00:18:58.027 Removing: /var/run/dpdk/spdk_pid57160 00:18:58.027 Removing: /var/run/dpdk/spdk_pid57216 00:18:58.027 Removing: /var/run/dpdk/spdk_pid57350 00:18:58.027 Removing: /var/run/dpdk/spdk_pid57373 
00:18:58.286 Removing: /var/run/dpdk/spdk_pid57583 00:18:58.286 Removing: /var/run/dpdk/spdk_pid57689 00:18:58.286 Removing: /var/run/dpdk/spdk_pid57796 00:18:58.286 Removing: /var/run/dpdk/spdk_pid57918 00:18:58.286 Removing: /var/run/dpdk/spdk_pid58026 00:18:58.286 Removing: /var/run/dpdk/spdk_pid58066 00:18:58.286 Removing: /var/run/dpdk/spdk_pid58102 00:18:58.286 Removing: /var/run/dpdk/spdk_pid58173 00:18:58.286 Removing: /var/run/dpdk/spdk_pid58301 00:18:58.286 Removing: /var/run/dpdk/spdk_pid58742 00:18:58.286 Removing: /var/run/dpdk/spdk_pid58812 00:18:58.286 Removing: /var/run/dpdk/spdk_pid58886 00:18:58.286 Removing: /var/run/dpdk/spdk_pid58907 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59046 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59071 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59212 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59228 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59298 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59323 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59387 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59405 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59600 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59637 00:18:58.286 Removing: /var/run/dpdk/spdk_pid59726 00:18:58.286 Removing: /var/run/dpdk/spdk_pid61059 00:18:58.286 Removing: /var/run/dpdk/spdk_pid61265 00:18:58.286 Removing: /var/run/dpdk/spdk_pid61411 00:18:58.286 Removing: /var/run/dpdk/spdk_pid62043 00:18:58.286 Removing: /var/run/dpdk/spdk_pid62250 00:18:58.286 Removing: /var/run/dpdk/spdk_pid62395 00:18:58.286 Removing: /var/run/dpdk/spdk_pid63027 00:18:58.286 Removing: /var/run/dpdk/spdk_pid63357 00:18:58.286 Removing: /var/run/dpdk/spdk_pid63497 00:18:58.286 Removing: /var/run/dpdk/spdk_pid64878 00:18:58.286 Removing: /var/run/dpdk/spdk_pid65131 00:18:58.286 Removing: /var/run/dpdk/spdk_pid65271 00:18:58.286 Removing: /var/run/dpdk/spdk_pid66656 00:18:58.286 Removing: /var/run/dpdk/spdk_pid66916 00:18:58.286 Removing: /var/run/dpdk/spdk_pid67056 
00:18:58.286 Removing: /var/run/dpdk/spdk_pid68442 00:18:58.286 Removing: /var/run/dpdk/spdk_pid68888 00:18:58.286 Removing: /var/run/dpdk/spdk_pid69028 00:18:58.286 Removing: /var/run/dpdk/spdk_pid70513 00:18:58.286 Removing: /var/run/dpdk/spdk_pid70778 00:18:58.286 Removing: /var/run/dpdk/spdk_pid70928 00:18:58.286 Removing: /var/run/dpdk/spdk_pid72408 00:18:58.286 Removing: /var/run/dpdk/spdk_pid72677 00:18:58.286 Removing: /var/run/dpdk/spdk_pid72817 00:18:58.286 Removing: /var/run/dpdk/spdk_pid74300 00:18:58.286 Removing: /var/run/dpdk/spdk_pid74793 00:18:58.286 Removing: /var/run/dpdk/spdk_pid74933 00:18:58.286 Removing: /var/run/dpdk/spdk_pid75082 00:18:58.286 Removing: /var/run/dpdk/spdk_pid75501 00:18:58.286 Removing: /var/run/dpdk/spdk_pid76230 00:18:58.286 Removing: /var/run/dpdk/spdk_pid76620 00:18:58.545 Removing: /var/run/dpdk/spdk_pid77308 00:18:58.545 Removing: /var/run/dpdk/spdk_pid77744 00:18:58.545 Removing: /var/run/dpdk/spdk_pid78504 00:18:58.545 Removing: /var/run/dpdk/spdk_pid78913 00:18:58.545 Removing: /var/run/dpdk/spdk_pid80896 00:18:58.545 Removing: /var/run/dpdk/spdk_pid81334 00:18:58.545 Removing: /var/run/dpdk/spdk_pid81774 00:18:58.545 Removing: /var/run/dpdk/spdk_pid83854 00:18:58.545 Removing: /var/run/dpdk/spdk_pid84341 00:18:58.545 Removing: /var/run/dpdk/spdk_pid84860 00:18:58.545 Removing: /var/run/dpdk/spdk_pid85914 00:18:58.545 Removing: /var/run/dpdk/spdk_pid86237 00:18:58.545 Removing: /var/run/dpdk/spdk_pid87176 00:18:58.545 Removing: /var/run/dpdk/spdk_pid87499 00:18:58.545 Removing: /var/run/dpdk/spdk_pid88434 00:18:58.545 Removing: /var/run/dpdk/spdk_pid88756 00:18:58.545 Removing: /var/run/dpdk/spdk_pid89434 00:18:58.545 Removing: /var/run/dpdk/spdk_pid89715 00:18:58.545 Removing: /var/run/dpdk/spdk_pid89782 00:18:58.545 Removing: /var/run/dpdk/spdk_pid89834 00:18:58.545 Removing: /var/run/dpdk/spdk_pid90082 00:18:58.545 Removing: /var/run/dpdk/spdk_pid90261 00:18:58.545 Removing: /var/run/dpdk/spdk_pid90364 
00:18:58.545 Removing: /var/run/dpdk/spdk_pid90458 00:18:58.545 Removing: /var/run/dpdk/spdk_pid90517 00:18:58.545 Removing: /var/run/dpdk/spdk_pid90542 00:18:58.545 Clean 00:18:58.545 02:34:32 -- common/autotest_common.sh@1453 -- # return 0 00:18:58.545 02:34:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:18:58.545 02:34:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.545 02:34:32 -- common/autotest_common.sh@10 -- # set +x 00:18:58.545 02:34:32 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:18:58.545 02:34:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.545 02:34:32 -- common/autotest_common.sh@10 -- # set +x 00:18:58.804 02:34:32 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:58.804 02:34:32 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:18:58.804 02:34:32 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:18:58.804 02:34:32 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:18:58.804 02:34:32 -- spdk/autotest.sh@398 -- # hostname 00:18:58.804 02:34:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:18:58.804 geninfo: WARNING: invalid characters removed from testname! 
00:19:20.859 02:34:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:21.799 02:34:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:23.709 02:34:57 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:25.619 02:34:59 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:27.527 02:35:01 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:30.065 02:35:03 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:31.973 02:35:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:31.973 02:35:05 -- spdk/autorun.sh@1 -- $ timing_finish 00:19:31.973 02:35:05 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:19:31.973 02:35:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:31.973 02:35:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:19:31.973 02:35:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:31.973 + [[ -n 5439 ]] 00:19:31.973 + sudo kill 5439 00:19:31.982 [Pipeline] } 00:19:31.998 [Pipeline] // timeout 00:19:32.003 [Pipeline] } 00:19:32.018 [Pipeline] // stage 00:19:32.024 [Pipeline] } 00:19:32.038 [Pipeline] // catchError 00:19:32.047 [Pipeline] stage 00:19:32.050 [Pipeline] { (Stop VM) 00:19:32.064 [Pipeline] sh 00:19:32.345 + vagrant halt 00:19:34.885 ==> default: Halting domain... 00:19:43.033 [Pipeline] sh 00:19:43.318 + vagrant destroy -f 00:19:45.862 ==> default: Removing domain... 
00:19:45.912 [Pipeline] sh 00:19:46.243 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:19:46.253 [Pipeline] } 00:19:46.269 [Pipeline] // stage 00:19:46.275 [Pipeline] } 00:19:46.289 [Pipeline] // dir 00:19:46.295 [Pipeline] } 00:19:46.310 [Pipeline] // wrap 00:19:46.316 [Pipeline] } 00:19:46.328 [Pipeline] // catchError 00:19:46.337 [Pipeline] stage 00:19:46.339 [Pipeline] { (Epilogue) 00:19:46.352 [Pipeline] sh 00:19:46.638 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:19:50.849 [Pipeline] catchError 00:19:50.851 [Pipeline] { 00:19:50.865 [Pipeline] sh 00:19:51.152 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:19:51.152 Artifacts sizes are good 00:19:51.162 [Pipeline] } 00:19:51.177 [Pipeline] // catchError 00:19:51.189 [Pipeline] archiveArtifacts 00:19:51.197 Archiving artifacts 00:19:51.303 [Pipeline] cleanWs 00:19:51.316 [WS-CLEANUP] Deleting project workspace... 00:19:51.316 [WS-CLEANUP] Deferred wipeout is used... 00:19:51.323 [WS-CLEANUP] done 00:19:51.325 [Pipeline] } 00:19:51.341 [Pipeline] // stage 00:19:51.346 [Pipeline] } 00:19:51.362 [Pipeline] // node 00:19:51.368 [Pipeline] End of Pipeline 00:19:51.426 Finished: SUCCESS